
Monday, September 29, 2008

Performance testing in the production environment

Question - Could you please tell me how to measure the performance test environment to the production environment? I mean scaling performance environment to the production environment? What factors should we have in mind for the performance environment?

Ans: It can be both inaccurate and dangerous to compare performance results obtained in the test environment to the production environment. The two most likely differences between the environments are system architecture and volume of data. Other differences might include the class of machines (app and Web servers), load balancers, report servers, and network configurations. This is why making a comparison can be inaccurate. It becomes dangerous if production planning is based on performance transaction timings obtained in an environment that might be very different.

Review the system diagram for both environments and see if there are additional differences you can identify. Communicate these differences clearly if anyone suggests that results from the test environment imply the performance timings would be the same in production.

I advocate testing in production whenever possible. In order to execute performance tests in production, I've typically worked in the middle of the night -- from 2am to 5am, for example -- while a production outage is taken. I've worked middle of the night on holiday weekends in order to gain test time in production as well. If you can't execute tests in production and you are left to execute performance tests in the test environment, then I recommend learning the performance behavior from your test environment and then communicating test results in terms of performance characteristics versus transaction timings.

Performance characteristics might include knowledge of CPU usage or performance degradations. For instance, you might be able to discover that the report performance exceeds the acceptable range defined when generating a report with some specified amount of data (such as 2 months of accounting numbers). Or you might learn search performance begins to degrade when X number of users are logged into the system and X numbers of users are executing searches at the same time. You can look for high level information and learn overall performance characteristics that can be helpful but won't provide performance timings that should be used to presume production will behave in the same way.

By: Karen N. Johnson

Wednesday, September 24, 2008

Testing Mobile Phone Applications......continued..

Find Ways to Understand and Simplify Problems
I have found diagnostic client software and diagnostic Web servers particularly useful for discovering and debugging issues with transcoders. Both the client and the server are designed to report the information that is sent and received. Find out whether the content is expected to be transcoded, and if so, how. If not, the data sent by one end should be received unchanged at the destination, and vice versa. The diagnostic software records all the data and makes problems easier to detect.
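Here is a minimal sketch of such a diagnostic Web server, using only Python's standard library; it simply echoes back whatever request line and headers it received, so you can compare what the phone sent with what actually arrived after any transcoding. The port is arbitrary.

# Minimal diagnostic Web server: echoes back the request line and headers
# it received, so you can see exactly what reached the server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Build a plain-text report of everything the server received.
        report = "Received request line: %s\n\nHeaders:\n%s" % (
            self.requestline, str(self.headers))
        body = report.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the phone's browser at http://<host>:8080/
    HTTPServer(("", 8080), EchoHandler).serve_forever()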
Use Complementary Tools
Find complementary ways to test using Web browsers for Web-based mobile sites. Firefox has numerous free plug-ins that emulate a phone's Web browser and make manual testing easier. I use the following: WMLBrowser, Web Developer, User Agent Switcher, and Modify Headers.
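The same kind of check can also be scripted. Below is a rough sketch using Python and the requests library (assumed to be available): it fetches a page once with a desktop User-Agent and once with a phone User-Agent so the two responses can be compared. The URL and User-Agent strings are placeholders.

import requests

URL = "http://example.com/"  # placeholder: the mobile site under test

DESKTOP_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
PHONE_UA = "Mozilla/5.0 (Linux; Android 8.0; Nokia 6) Mobile"  # illustrative

def fetch(user_agent):
    # Ask for the page while pretending to be a particular browser.
    return requests.get(URL, headers={"User-Agent": user_agent}, timeout=10)

desktop = fetch(DESKTOP_UA)
phone = fetch(PHONE_UA)

# Compare what each "browser" was served: status, content type, size, final URL.
for name, resp in (("desktop", desktop), ("phone", phone)):
    print(name, resp.status_code, resp.headers.get("Content-Type"),
          len(resp.content), "bytes", "->", resp.url)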
Reduce the Number of Combinations
As there are thousands of permutations of phones and carriers, pick a representative subset of phones to test with. For instance, when testing Java software (written in Java 2 Micro Edition), I test on classes of phones that include Nokia Series 60 second and third editions; Sony Ericsson's Java Platform 6, 7, and 8 phones; and BlackBerry models chosen by keyboard layout and operating system version. Pick popular phones and phones with large and small screens and a variety of keyboards, including T9 (where the alphabet is split across the numeric keys 2 to 9), QWERTY, and less common layouts. Over time you may collect "interesting" phones that help expose application flaws. For example, the core software of one of my phones has been heavily customized by the carrier, and it exposes limitations in applications very quickly. Because these issues were found and reported early, the developers were able to revise their application software so it was much more flexible and robust.

Here's a site that details another way to classify your phones based on the operating system and UI: Using a Device Hierarchy.
By Julian Harty

Testing Mobile Phone Applications

Summary:
It took eighteen months for Julian Harty to overcome the various challenges of testing mobile wireless applications. In turn, he has learned some valuable lessons that he wants to share with you in this week's column.

Eighteen months ago, I started learning about the joys and challenges of testing mobile wireless applications. This article is dedicated to the various tips and tricks I've collected along the way that may help you become productive much more quickly.

Reduce Setup Time
Find ways to reduce the time required to configure the phone, install the software, and learn about the underlying connectivity. For example:
Your carrier or handset manufacturer may enable you to download the Internet settings to your phone rather than trying to discover and then manually key in the obscure settings.
Often the software needs to be installed from a Web site. Use text messages to send long Web addresses. Keying a URL can take several minutes and one false move may mean starting again!
Learn how to use a computer to install the software. Many manufacturers provide free software that will enable you to add and remove software applications relatively painlessly from a computer.
Figure Out Connectivity
Mobile connectivity remains a challenge. But remember, a connection relies on at least four elements:
1. Configuration of the phone
2. The service provided by the carrier (and paid for by the user)
3. The connectivity between the carrier’s wireless network and the Internet (where gateways can filter, modify, convert, or even block communications for various reasons)
4. And the rest of the connection to the Web/application server, which may include more gateways, firewalls, etc.
Understand Your Data Plan:
Carriers may offer a range of data services, from very limited access to a small list of approved Web sites (called a walled garden in the industry) to full "Internet" access that may even allow Voice over IP, video streaming, etc. Some carriers provide clear information on which services are available for each price plan; for others, you may have to research what services and Web addresses work reliably. Check how much you pay for data before embarking on data-intensive applications. I had a monthly data bill that was more than $300—even though I didn't use any of the installed applications on my phone during that time. However, one of the applications polled its server in the background while I was abroad. At $16/MB transferred, it was an expensive lesson to learn!
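To see how a background poller can run up a bill like that, here is a back-of-the-envelope calculation; the polling interval and payload size are invented, and only the $16/MB rate is the one mentioned above.

# Rough cost of an application that quietly polls its server while roaming.
RATE_PER_MB = 16.00      # $ per MB transferred (the rate mentioned above)
PAYLOAD_KB = 5           # assumed size of one poll (request + response)
POLLS_PER_HOUR = 4       # assumed: the app checks in every 15 minutes
DAYS = 30

total_kb = PAYLOAD_KB * POLLS_PER_HOUR * 24 * DAYS
total_mb = total_kb / 1024.0
print("Data transferred: %.1f MB, cost: $%.2f" % (total_mb, total_mb * RATE_PER_MB))
# With these assumptions: about 14.1 MB, roughly $225 -- before any other usage.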
To be continued........

Tuesday, September 23, 2008

Software testing in a virtual environment

Q- What is the likelihood of capturing accurate load testing results in a virtual test environment? We use LoadRunner/PerformanceCenter for performance testing. Our company is in the process of adopting virtualization. It seems this may be ideal for functional test environments, but not for performance test environments. What is your opinion?

A- There are a lot of ways to use virtual environments in your performance testing, so there's no easy answer to this question. I'm assuming that you're referring to hosting the entire application in a virtual environment and running your performance testing against that platform. My answer is that, as always, it depends.
Some research on the topic has found that virtual environments don't scale as well as non-virtual environments. In one study, BlueLock, a company that provides IT infrastructure as a service, found that "the number of simultaneous users that could be handled by the virtualized server was 14% lower than the number of simultaneous users being handled by the traditional server configuration."
This is consistent with my experience testing financial service applications in virtual environments. If you don't have much choice, or if you have a lot of pressure to make it work, I would recommend that you perform a comparison performance test to prove out the new platform. If you can do that successfully, you'll have some confidence that the platform is comparable. But just be aware that over time, as the application changes and the server configurations change (both virtual and the physical servers in production) your comparison will become outdated. It may happen faster than you might think.
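A comparison test like that boils down to running the same load scenario against both platforms and comparing the numbers. Here is a small sketch; the figures are placeholders for your own LoadRunner/PerformanceCenter results, and the 14% figure above is the kind of difference to look for.

# Compare the same load-test result on the physical and virtual platforms.
physical_max_users = 1000   # placeholder: max simultaneous users on physical servers
virtual_max_users = 860     # placeholder: max simultaneous users on virtual servers

difference = (physical_max_users - virtual_max_users) / physical_max_users * 100
print("Virtual platform handled %.1f%% fewer simultaneous users" % difference)

# A threshold agreed on with stakeholders turns this into a pass/fail check.
ACCEPTABLE_DEGRADATION = 15.0  # percent, illustrative
print("Comparable enough?", difference <= ACCEPTABLE_DEGRADATION)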
By Mike Kelly

Sunday, September 21, 2008

Prioritizing software testing on little time

Q- Suppose I am testing a Web application, the time period is very short, and we have a heap of test cases. Which test cases should we run first to make our Web site more secure and reliable?
Expert’s Response- This question pops up in various forms all the time. It boils down to "We don't have enough time to test everything, so what do we test?" Not having enough time, of course, is not only the status quo for testing software, it is a universal truth for any software that will ever go into production.
Given that, here's my advice.
Start by forgetting that you have any test cases at all.
Make a list (quickly -- remember, we don't have enough time to test, so let's not waste what little time we have making lists) of usage scenarios for each of the following categories. I usually limit myself to five on the first pass, but no matter what, move on to the next category as soon as you find yourself thinking too hard about the one you are on. If you have to stop and think, whatever you come up with isn't important enough.
a. What things will users do most often with this application?
b. What areas of this application are most likely to contain show-stopping defects?
c. What parts of this application are critical to the business?
d. Are any parts of this application governed by legal or regulatory agencies?
e. What parts of the application would be most embarrassing to the company if broken?
f. What parts of the application has my boss said must be tested?
Prioritize the list. If you've made the list in a word processor or using note cards, this will take under 60 seconds (if you have to write a new list by hand and you write as slowly as I do, it will probably take a little longer). Here are the rules for prioritizing; a quick sketch of them in code follows the list.
Count the number of times a scenario appears in any of your categories. The more times the scenario appears, the higher the priority.
In case of a tie, 'a' comes before 'b' comes before 'c,' etc.
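Here is that counting and tie-breaking in a short Python sketch; the categories and scenarios are invented, and the categories are keyed 'a' through 'f' in the order listed above.

# Prioritize usage scenarios: count how many categories each scenario appears in;
# ties are broken by the earliest category ('a' before 'b' before 'c', ...).
categories = {            # invented example data
    "a_frequent": ["login", "search", "checkout"],
    "b_show_stoppers": ["checkout", "payment"],
    "c_business_critical": ["checkout", "payment", "login"],
    "d_regulatory": ["payment"],
    "e_embarrassing": ["search"],
    "f_boss_mandated": ["login"],
}

def priority_key(scenario):
    count = sum(scenario in members for members in categories.values())
    earliest = min(name for name, members in categories.items() if scenario in members)
    return (-count, earliest)   # more appearances first, then earliest category

scenarios = sorted({s for members in categories.values() for s in members},
                   key=priority_key)
for s in scenarios:
    print(s)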
Now scan your test cases. Note which ones are covered and which ones aren't. On the ones that aren't covered, ask yourself, "Can I live with not testing this?" If the answer is no, add it to the bottom of the list.
Start testing.
If you complete these tests before time is up, do the same exercise again without repeating any usage scenarios. If not, at least you have a defensible list of what you did and did not test, and you lost only about 15 minutes of testing time creating that list.
In case you're wondering, this approach is derived from my FIBLOTS heuristic for deciding what usage scenarios to include when developing performance tests. FIBLOTS is an acronym representing the words that complete the sentence "Ensure your performance tests include usage scenarios that are:
Frequent
Intensive
Business critical
Legally enforceable
Obvious
Technically risky
Stakeholder mandated."
I guess for functional testing, it would be "Ensure you test usage scenarios that are:
Frequent
Risky
Business critical
Legally enforceable
Obvious
Stakeholder mandated."
Too bad the acronym FRBLOS isn't as easy to remember as FIBLOTS.

By: Scott Barber

V Model


A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

White Box Testing

White box testing deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box or clear box testing.
Tests written using the white box testing strategy address code coverage, branches, paths, statements, and the internal logic of the code. To implement white box testing, the tester has to work with the code and therefore needs knowledge of coding and logic, i.e., the internal workings of the code.
White box testing also requires the tester to look into the code and find out which unit, statement, or chunk of code is malfunctioning.

Advantages of White box testing are:

i) Because knowledge of the internal code structure is a prerequisite, it becomes very easy to determine which type of input/data will test the application effectively.
ii) White box testing helps in optimizing the code.
iii) It helps in removing extra lines of code, which can harbor hidden defects.

Disadvantages of white box testing are:

i) Because knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, some of which may later cause the application to fail.

Types of testing under White/Glass Box Testing.

Unit Testing:

The developer carries out unit testing to check whether a particular module or unit of code is working correctly. Unit testing occurs at the most basic level, as it is carried out as soon as a unit of code is developed or a particular piece of functionality is built.

Static and dynamic Analysis: Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage: In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage:

No software application is written as one continuous flow of code; at some point the code must branch in order to perform particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
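A small illustration of the difference between statement and branch coverage, with an invented function and values:

def apply_discount(price, is_member):
    # Hypothetical function used to illustrate coverage.
    discount = 0
    if is_member:
        discount = price * 0.1
    return price - discount

# This single test executes every statement (statement coverage is 100%):
assert apply_discount(100, True) == 90

# ...but the False branch of the 'if' is never taken. Branch coverage
# additionally requires a test where the condition is false:
assert apply_discount(100, False) == 100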

Security Testing:

Security testing is carried out to find out how well the system can protect itself from unauthorized access, hacking and cracking, code damage, and so on. This type of testing needs sophisticated testing techniques.

Mutation Testing: A kind of testing in which the application's code is deliberately modified in small ways (each modified version is called a mutant) and the existing tests are run again; if the tests fail against the mutant, the mutant is said to be killed. It helps in finding out how effective the test suite is at detecting changes to the code.
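A tiny illustration with an invented function: the mutant changes '<' to '<=', and a good boundary test detects (kills) it.

def is_minor(age):
    return age < 18            # original code

def is_minor_mutant(age):
    return age <= 18           # mutant: '<' deliberately changed to '<='

def boundary_test(func):
    # The test suite we are evaluating: a single boundary check at age 18.
    return func(18) is False

print("Original passes the test:", boundary_test(is_minor))                     # True
print("Mutant killed (test fails on it):", not boundary_test(is_minor_mutant))  # True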

Black box Testing

Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality; test cases are derived from the specification.
The system is a black box whose behavior can only be determined by studying its inputs and the related outputs.
Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program (code) is necessary.
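As a small illustration, here is a spec-only test sketch in Python. The shipping_cost function and its specification are invented; the tests use only legal inputs and expected outputs, never the implementation.

# Specification (invented): shipping_cost(weight_kg) charges a flat $5.00
# for orders up to 2 kg and $2.50 per additional kg.
# The tester never looks inside shipping_cost -- only at inputs and outputs.

def black_box_tests(shipping_cost):
    cases = [
        (1.0, 5.00),    # within the flat-rate band
        (2.0, 5.00),    # boundary of the flat-rate band
        (3.0, 7.50),    # one kg over the boundary
        (4.5, 11.25),   # fractional weight over the boundary
    ]
    for weight, expected in cases:
        actual = shipping_cost(weight)
        assert round(actual, 2) == expected, (weight, expected, actual)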

Advantages of Black Box Testing
more effective on larger units of code than glass box testing
tester needs no knowledge of implementation, including specific programming languages
tester and programmer are independent of each other
tests are done from a user's point of view
will help to expose any ambiguities or inconsistencies in the specifications
test cases can be designed as soon as the specifications are complete
Disadvantages of Black Box Testing
only a small number of possible inputs can actually be tested, to test every possible input stream would take nearly forever
without clear and concise specifications, test cases are hard to design
there may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
may leave many program paths untested
cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
most testing related research has been directed toward glass box testing.

Manual vs. automated penetration testing

I have a vague understanding of the differences between manual and automated penetration testing, but I don't know which method to use. Are the automated pen tests trustworthy? Should I use both methods?

You should absolutely use both methods, beginning with automated penetration testing and supplementing it with manual penetration testing. Automated penetration testing tools tend to be more efficient and thorough, and chances are that malicious hackers are going to use automated attacks against you. These automated test tools come from many sources, including commercial, open source and custom-designed tools. Often these tools focus on a particular vulnerability area, so multiple penetration testing tools may be needed. Because these automated tools are typically updated only monthly or weekly, you must manually verify their output to check for false alarms and to test for the latest vulnerabilities. With over 50 new vulnerabilities being discovered each week, there will always be new vulnerabilities that the tools may not be able to detect. Without this manual testing, your penetration testing will be incomplete.

What is penetration testing

Penetration testing:

Penetration testing is the security-oriented probing of a computer system or network to seek out vulnerabilities that an attacker could exploit. The testing process involves an exploration of all the security features of the system in question, followed by an attempt to breach security and penetrate the system. The tester, sometimes known as an ethical hacker, generally uses the same methods and tools as a real attacker. Afterwards, the penetration testers report on the vulnerabilities and suggest steps that should be taken to make the system more secure.

In his article "Knockin' At Your Backdoor," security expert Thomas Rude lists some of the system components that an ethical hacker might explore: areas that could be compromised in the demilitarized zone (DMZ); the possibility of getting into the intranet; the PBX (the enterprise's internal telephone system); and the database. According to Rude, this is far from an exhaustive list, however, because the main criterion for testing is value: if an element of your system is worthy of safe-keeping, its security should be tested regularly.

By: Sunil Tadwalkar (PMP)

Friday, September 19, 2008

Integration testing: steps toward SOA quality

Integration testing: The process of testing integrated services
Integration testing of a service should be performed by the QA/testing team and reviewed by both the architecture lead and development leads. At a minimum integration testing should consist of both verification of the integrated architectural design as it relates to the service under test and validation of integrated services. For each service this would consist of testing the functionality of the service and its relationship with all immediate (directly connected) services.
In our example, integration testing of the cart service would involve testing the cart service functionality and the integration of that service to the catalogue service, customer history service, digital fulfillment service, and the Web-enabled presentation layer. The purpose is to discover and address discrepancies between the functional specification and implementation of the cart service and its contractual responsibilities with other (immediate) services. Once again, this is especially important when implementing SOA.
The integration testing effort should focus on the service undergoing integration testing and its contractual responsibilities with other (immediate) services. There are several reasons for taking this approach, not the least of which is that integration testing of SOA solutions is extremely challenging -- the wider the scope of the integration testing effort, the more challenging it becomes. It is useful to focus on the immediate landscape to ensure the contractual obligations are being met by each service and then extend the scope of testing during functional testing. The basic premise is to treat the services as building blocks that compose/support a particular business event or part of an event.
There are several automated SOA testing tools available (commercial and shareware) that help address the testing of services, and there are more traditional testing tools that can be tooled to address SOA testing. Many are able to capture service descriptions and create initial tests based on these descriptions. Those tests can then be automated.

Once you've completed integration testing of closely related services, you can begin true functional testing. This is where the real challenges of testing SOA solutions come to bear and involve the following:
· Third-party services
· Late binding (selection of service)
· Missing/incomplete/changing services
· Multi-platform/Multi-language distributed services

Unit, integration testing first steps toward SOA quality


SOA -- Unit & integration testing
SOA promotes reuse at the service level rather than at the code/objects level. If you think of each component truly as a service, then there are internal aspects (data and process) and external facing (interface) aspects of the service that need to be tested.
It is convenient to think of the internal aspects of the service in terms of unit testing and to think of testing interface relationships with immediate service partners in terms of integration testing. It should be noted that unit and integration testing are often ignored or given minimal attention in traditional development environments -- the assumption being that downstream testing will catch any errors before the product reaches production. That is not the case in the world of SOA, where the eventual applications of a service could be, and often are, beyond the control of the development group. Demonstrated adherence to the service design and interface specification is one way to reduce the impact of unexpected downstream implementations of the service.

Friday, September 12, 2008

SOA Driven Testing?

By now, I suspect that most folks who are involved with designing, writing, maintaining and/or supporting software have at least heard of the newest addition to the industry's “buzz-acronym alphabet soup." It's not XP (Extreme Programming), OO (Object Oriented) or even TDD (Test Driven Development). This time the buzz-acronym is SOA (Service Oriented Architecture). And in fashion with many of the more recent buzz-acronyms, the expanded phrase sheds little, if any, light on what the term really means. When I checked in with a few friends of mine to make sure I had my terminology straight, one of them pointed me to Martin Fowler’s Blog where he writes…
“…one question I'm bound to be asked is "what do you think of SOA (Service Oriented Architecture)?" It's a question that's pretty much impossible to answer because SOA means so many different things to different people.
• For some SOA is about exposing software through web services…
• For some SOA implies an architecture where applications disappear…
• For some SOA is about allowing systems to communicate over some form of standard structure… with other applications…
• For some SOA is all about using (mostly) asynchronous messaging to transfer documents between different systems…
I've heard people say the nice thing about SOA is that it separates data from process, that it combines data and process, that it uses web standards, that it's independent of web standards, that it's asynchronous, that it's synchronous, that the synchronicity doesn't matter....
I was at Microsoft PDC a couple of years ago. I sat through a day's worth of presentations on SOA -at the end I was on the SOA panel. I played it for laughs by asking if anyone else understood what on earth SOA was. Afterwards someone made the comment that this ambiguity was also something that happened with Object Orientation. There's some truth in that, there were (and are) some divergent views on what OO means. But there's far less Object Ambiguity than there is Service Oriented Ambiguity…”
Service Oriented Ambiguity?!? No *WONDER* I got confused sometimes while reading the articles in CRN, SD-Times, CTO Source, TechRepublic and others about the technologies behind SOA! This is just one more reason I’m thrilled with my choice to be a tester. Compared to figuring out all of the enabling technologies, testing SOA is a piece of cake. Don’t get me wrong, testing SOA has its challenges, but we at least have some experience with the concepts. Allow me to explain.
SOA Concept
Let's start by taking a look at what SOA is conceptually all about from a tester’s point of view – without creating a paper that is certain to win us a round of “Buzzword Bingo." Ignoring the hype around the phrase and the acronym, SOA is nothing more than the most recent step in the natural evolution of software and software development that started with the GOTO statement. I'm not being cynical; I'm serious! The dreaded GOTO statement started the evolution of abstraction of code decades ago. Some of you may remember being chastised for using the GOTO statement in line numbered BASIC and upgrading to GOSUB-RETURN to save face before becoming empowered by Functions, Procedures and eventually Java Beans or Objects. Even if your programming background doesn't include first-hand experience with that evolution, you probably recognize all of these programming concepts as methods of minimizing code redundancy and abstracting sections of code to maximize code re-use.
This concept of abstraction and code re-use (the basic concept behind what Fowler called the Object Ambiguity) is what paved the way for the software industry to think in terms of not just reusable segments of code but eventually entire mini-applications that can be used in many different contexts to provide the same, or similar, functionality. Possibly the most well known of this breed of mini-application, as I think of them, are those that process credit card purchases over the web.
I'm sure that it's no surprise to anyone reading this article that once you get beyond the service providers and resellers, there are really only a small handful of organizations that actually publish and maintain the vast majority of credit card processing software. In fact, virtually all of us with Web Sites that sell products (often referred to as B2C or Business to Consumer sites) simply “plug-in” to one of those pieces of software (for a small fee, of course) to make our once-innocent web site into an E-Commerce web site! Of course, this particular type of mini-application has its own buzz-term – it's called a Web Service. Web Services have been around for several years and are actually the direct predecessors, or maybe the earliest adopted subset, of SOA.
For years I struggled with the question of “What’s the difference between a Service and an Object on Steroids?” It took me almost four years to navigate my way through the implementation technologies and coding patterns to figure out that the fundamental difference is that Objects are programmer-centric abstractions of code and Services are user-or business-centric abstractions of code. Basically, a programmer may write code to reference a number of objects that the user is completely unaware of while that user performs an activity, like logging into a secure web site. If, instead, the “log into a secure web site” activity were to be written as a Service, it would be a single entity that accepted certain input and responded with certain output. Not only is the user unaware of the Service, but the programmer writing the application need only be aware of the format and contents of the input and output parameters. In fact, SOA is really nothing more than building software applications in such a manner as to be able to take advantage of Services, whether they are available via the web or the next server down on the rack. Independent of all the ambiguity about technologies, protocols and degrees of abstraction, that is really all there is to SOA.
Testing SOA
That said, there are several things about SOA that are going to present challenges that many testers are not used to facing, at least not in the volumes and combinations that we will see with SOA. First, testing Services in an SOA environment is fundamentally different from testing the Objects that inspired them in at least one significant way. Objects were (and are), as we mentioned, programmer-centric segments of code that are likely to be used in more than one area of one or more applications. An object is generally tested directly via unit-tests written by the developer and indirectly by user-acceptance and black-box testers.
Services, however, require a different testing approach because they encompass entire business processes, can call dozens of objects, and are unlikely to have been developed or tested by anyone you will ever meet or speak to. As testers, we have little choice but to validate the service as a black box, probably through some kind of test harness, focusing on input values, output values and data format. Sounds a lot like a unit test, doesn't it?
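As a rough illustration, here is what such a black-box harness might look like in Python using the requests library; the service endpoint, payload, and response fields are entirely hypothetical, and in practice they would come from the service's published contract.

import requests

SERVICE_URL = "https://payments.example.com/charge"  # hypothetical service endpoint

def test_charge_service():
    # Input values the service contract says are legal.
    payload = {"card_number": "4111111111111111", "amount": "10.00", "currency": "USD"}
    response = requests.post(SERVICE_URL, json=payload, timeout=10)

    # We know nothing about the service internals; we only check
    # output values and data format against the published contract.
    assert response.status_code == 200
    result = response.json()
    assert result["status"] in ("approved", "declined")
    assert "transaction_id" in result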
A New Approach
The next challenge we testers face is that with SOA, we can no longer get away with thinking about applications exclusively from the unit and black-box perspectives. We absolutely must think about SOA applications in (at least) three logical segments: the services themselves, the user interface, and a communication or SOA interface segment (sometimes referred to as a “service broker"). Sounds easy enough, but here's the kicker: we need to test each of these segments both independently and collectively, and we need to test each of them both at the unit level and as a black box. This means more testers pairing with developers, more testers writing test harnesses, and more automation via API versus UI.
The testing challenges that SOA present that I am most excited about (yes, I am aware that makes me a geek) are the challenges related to performance testing. We as an industry already have enough trouble finding the time and/or the money to performance test the way we'd like, even when we are lucky enough to be part of an organization that thinks about performance testing at all. Now we're intentionally building applications so we can plug in code that we will likely never see that was probably written and is certainly hosted elsewhere on some machine we are unlikely to have access to, that takes our data in and magically spits out the “answer” (we'll assume it's even the “correct answer”). How, exactly, are we to trust that this magic service is going to handle our holiday peak? Even more frightening, how are we going to trust that a whole bunch of these magic services are going to all perform well together like the “well-oiled machine” we'd have built (or would like to believe we'd build) on our own?
Why am I excited about this you ask? No, not because I think these challenges are going to make me rich (though, that would be nice). I'm excited because I think that SOA is going to force the industry to bridge the gap between top-down (black-box/user-experience) performance testing and bottom-up (unit/component/object-level) performance testing that has needed to be bridged for as long as I've been involved with performance testing.
Performance Testing as it was Meant To Be
Rather than having to figure out the logical segments for decomposition and recomposition, they have already been defined for us. Rather than having to build test harnesses exclusively for performance testing that no one else has thought of, we can piggy-back on the test harnesses used by the functional and unit testers. Rather than starting from a belief that “We wrote it, therefore it will perform well," we will be starting from a position of “Someone else wrote it and we need to validate their performance claims and make sure that it actually works that well with our UI/data/configuration.”
Facing the Challenge
I'm certain that some of these challenges may seem pretty, well, challenging to folks who haven't faced them before, but it’s not completely uncharted territory. Keep your eyes open for more articles, presentations and tools focused on this kind of testing. Pay particular attention to the folks who talk about how testing SOA relates to testing EAI (or EDI for that matter), Middleware and Web Services. They are the ones who have taken on similar challenges before.
Martin Fowler closed the blog I referenced earlier this way:
“…many different (and mostly incompatible) ideas fall under the SOA camp. These do need to be properly described (and named) independently of SOA. I think SOA has turned into a semantics-free concept that can join 'components' and 'architecture'…”
So while the developers, vendors, marketers and architects are sorting out the “Buzz-Acronym Soup" Martin alludes to, we testers can be digging into the concept of testing applications that are segregated cleanly enough for us to effectively apply all of our tester skills at more points in both the application and the development process. As a tester, I really can’t think of a better way to spend my time until we switch to the wave made by the next acronym that sticks!
Acknowledgments
This article was first written in support of a webinar presented by Software Quality Engineering on May 9, 2006.
About the Author
Scott Barber is the CTO of PerfTestPlus (www.PerfTestPlus.com) and Co-Founder of the Workshop on Performance and Reliability (WOPR – www.performance-workshop.org). Scott's particular specialties are testing and analyzing performance for complex systems, developing customized testing methodologies, testing embedded systems, testing biometric identification and security systems, group facilitation and authoring instructional or educational materials. In recognition of his standing as a thought-leading performance tester, Scott was invited to be a monthly columnist for Software Test and Performance Magazine in addition to his regular contributions to this and other top software testing print and on-line publications; he is regularly invited to participate in industry-advancing professional workshops and to present at a wide variety of software development and testing venues. His presentations are well received by industry and academic conferences, college classes, local user groups and individual corporations. Scott is active in his personal mission of improving the state of performance testing across the industry by collaborating with other industry authors, thought leaders and expert practitioners as well as volunteering his time to establish and grow industry organizations.
His tireless dedication to the advancement of software testing in general and specifically performance testing is often referred to as a hobby in addition to a job due to the enjoyment he gains from his efforts.
About PerfTestPlus
PerfTestPlus was founded on the concept of making software testing industry expertise and thought-leadership available to organizations, large and small, who want to push their testing beyond "state-of-the-practice" to "state-of-the-art." Our founders are dedicated to delivering expert level software-testing-related services in a manner that is both ethical and cost-effective. PerfTestPlus enables individual experts to deliver expert-level services to clients who value true expertise. Rather than trying to find individuals to fit some pre-determined expertise or service offering, PerfTestPlus builds its services around the expertise of its employees. What this means to you is that when you hire an analyst, trainer, mentor or consultant through PerfTestPlus, what you get is someone who is passionate about what you have hired them to do, someone who considers that task to be their specialty, someone who is willing to stake their personal reputation on the quality of their work -- not just the reputation of a distant and "faceless" company.

By: R. Scott Barber

Wednesday, September 10, 2008

Unit, integration testing first steps toward SOA quality


The unit testing challenge with SOA is not so much the actual exercise of unit testing but the speed at which it has to occur.
The service-oriented architecture (SOA) paradigm endeavors to address both the time-to-market challenge faced by business and the need for IT to develop, test, and deploy ever-evolving, complex solutions. SOA breaks complex solutions into component parts (services) that present a simple interface to the application landscape while encapsulating both data and process. These services can be provided in-house, by business partners, or by commercial providers.
From a testing perspective, the most significant challenges presented by SOA are that SOA application landscapes are "always on," are continuously changing, are loosely coupled, and usually involve multiple providers (in-house, business partners, and commercial services). Finally, the quality of the individual services does not necessarily translate into the overall quality of the business solution -- it is the quality of the whole that truly matters.
Example of an SOA application landscape
Let's look at a simple and rather "coarse" (large complex services) example of an SOA application landscape. In this case, the SOA solution addresses the need to sell digital media online. Service layers consist of a Web-enabled presentation layer, customer account service, catalogue service, cart service, digital fulfillment service, customer history service, and an accounting service that interfaces to a standard financial services database. The following figure illustrates this SOA solution.
From a unit and integration testing perspective we will focus on the "cart service" and its relationship to the presentation layer and immediate service partners. We will later extend this model to address a single business event (customer purchase) in a follow-on article on SOA functional and regression testing. The following figure illustrates the relationship of the cart service to its immediate service partners.
The cart service is loosely coupled to the catalogue, customer history, and digital fulfillment services with a tight coupling to the Web-enabled presentation layer.
SOA -- Unit & integration testing
SOA promotes reuse at the service level rather than at the code/objects level. If you think of each component truly as a service, then there are internal aspects (data and process) and external facing (interface) aspects of the service that need to be tested.
It is convenient to think of the internal aspects of the service in terms of unit testing and to think of testing interface relationships with immediate service partners in terms of integration testing. It should be noted that unit and integration testing are often ignored or given minimal attention in traditional development environments -- the assumption being that downstream testing will catch any errors before the product reaches production. That is not the case in the world of SOA, where the eventual applications of a service could be, and often are, beyond the control of the development group. Demonstrated adherence to the service design and interface specification is one way to reduce the impact of unexpected downstream implementations of the service.
Unit testing: The process of testing the individual services
Unit testing of a service should be performed by the developer and should be reviewed by development peers. At a minimum unit testing should consist of both verification of the unit (service) design and validation of the unit (service) implementation.
The purpose of unit testing is to discover and address discrepancies between the specification and implementation. This is especially important when implementing SOA because services are often developed in parallel and undergo continuous development/deployment. That means adherence to design specifications and the ability to effectively encapsulate a service is critical to meeting the contractual obligations of the service. The developer or development organization will have to create stubs to test the interfaces the service supports. This becomes critical once development moves towards integration testing.
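One way to picture the stubs mentioned above is sketched below in Python; the article's digital-media example is used for the scenario, but every class and method name here is invented. The cart service is unit tested against a stand-in for the catalogue service so that only the cart's own data and process are exercised.

class CatalogueServiceStub:
    """Stand-in for the real catalogue service: returns canned data
    so the cart service can be unit tested in isolation."""
    def get_price(self, item_id):
        return {"ITEM-1": 9.99, "ITEM-2": 4.50}.get(item_id)

class CartService:
    # Simplified cart: in reality this would call the catalogue service
    # over its published interface; here the dependency is injected.
    def __init__(self, catalogue):
        self.catalogue = catalogue
        self.items = []

    def add_item(self, item_id):
        self.items.append(item_id)

    def total(self):
        return sum(self.catalogue.get_price(i) for i in self.items)

def test_cart_total_uses_catalogue_prices():
    cart = CartService(CatalogueServiceStub())
    cart.add_item("ITEM-1")
    cart.add_item("ITEM-2")
    assert cart.total() == 14.49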
The unit testing challenge with SOA is not so much the actual exercise of unit testing but the speed at which it has to occur. There are several well-documented approaches to development and unit testing that will address this challenge, but one that is not often mentioned is the application of Agile development techniques in combination with instrumentation of the code to support continuous integration. Using methodologies and technologies that enable this approach helps address the challenges of continuous development/deployment that are landmark characteristics of the SOA application landscape.
One other aspect of the SOA application landscape that will impact how unit testing is approached is the extent to which in-house developed and deployed services are employed. The more "in-house" the SOA solution is, the more it can be treated as a component-based solution. Basically that means the pace and extent of any changes are within the control of the development organization.
Integration testing: The process of testing integrated services
Integration testing of a service should be performed by the QA/testing team and reviewed by both the architecture lead and development leads. At a minimum integration testing should consist of both verification of the integrated architectural design as it relates to the service under test and validation of integrated services. For each service this would consist of testing the functionality of the service and its relationship with all immediate (directly connected) services.
In our example, integration testing of the cart service would involve testing the cart service functionality and the integration of that service to the catalogue service, customer history service, digital fulfillment service, and the Web-enabled presentation layer. The purpose is to discover and address discrepancies between the functional specification and implementation of the cart service and its contractual responsibilities with other (immediate) services. Once again, this is especially important when implementing SOA.
The integration testing effort should focus on the service undergoing integration testing and its contractual responsibilities with other (immediate) services. There are several reasons for taking this approach, not the least of which is that integration testing of SOA solutions is extremely challenging -- the wider the scope of the integration testing effort, the more challenging it becomes. It is useful to focus on the immediate landscape to ensure the contractual obligations are being met by each service and then extend the scope of testing during functional testing. The basic premise is to treat the services as building blocks that compose/support a particular business event or part of an event.
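A hypothetical sketch of one such contract check between the cart service and its immediate partner, the catalogue service, follows; the endpoints and response fields are invented and would in practice come from each service's interface specification.

import requests

CART_URL = "https://cart.example.com"            # hypothetical deployed cart service
CATALOGUE_URL = "https://catalogue.example.com"  # hypothetical catalogue service

def test_cart_price_matches_catalogue():
    # The contract under test: the cart must report the price published
    # by the catalogue service for the same item.
    item = requests.get(CATALOGUE_URL + "/items/ITEM-1", timeout=10).json()

    cart = requests.post(CART_URL + "/carts", timeout=10).json()
    requests.post(CART_URL + "/carts/%s/items" % cart["id"],
                  json={"item_id": "ITEM-1"}, timeout=10)
    contents = requests.get(CART_URL + "/carts/%s" % cart["id"], timeout=10).json()

    assert contents["total"] == item["price"]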
There are several automated SOA testing tools available (commercial and shareware) that help address the testing of services, and there are more traditional testing tools that can be tooled to address SOA testing. Many are able to capture service descriptions and create initial tests based on these descriptions. Those tests can then be automated.
Once you've completed integration testing of closely related services, you can begin true functional testing. This is where the real challenges of testing SOA solutions come to bear and involve the following:
· Third-party services
· Late binding (selection of service)
· Missing/incomplete/changing services
· Multi-platform/Multi-language distributed services

By David W. Johnson


Thursday, September 4, 2008

Software testing deliverables: From test plans to status reports

There is a core set of test deliverables required for any software testing phase: the test plan, test cases, defect documentation and status reports. Taken together, this set of deliverables takes the testing team from planning through testing and on to defect remediation and status reporting. This does not represent a definitive set of test deliverables, but it will help any test organization begin the process of determining an appropriate set of deliverables.
One common misconception is that these must be presented as a set of documents, but there are toolsets and applications available that capture the content and intent of these deliverables without creating a document or set of documents. The goal is to capture the required content in a useful and consistent framework as concisely as possible.
Test plan: At a minimum the test plan presents the test objectives, scope, approach, assumptions, dependencies, risks and schedule for the appropriate test phase or phases. Many test organizations will use the test plan to describe the software testing phases, testing techniques, testing methods and other general aspects of any testing effort. General information around the practice of testing should be kept in a "Best Practices" repository -- testing standards. This prevents redundant and conflicting information from being presented to the reader and keeps the test plan focused on the task at hand -- planning the testing effort. (See "The role of a software test manager".)
Objectives -- mission statement: The objective of the current testing effort needs to be clearly stated and understood by the software testing team and any other organization involved in the deployment. This should not be a sweeping statement on testing the "whole application" -- unless that is actually the goal. Instead the primary testing objectives should relate to the purpose of the current release. If this were a point-of-sale system and the purpose of the current release was to provide enhanced online reporting functionality, then the objective/mission statement could be this:
"To ensure the enhanced online reporting functionality performs to specification and to verify any existing functionality deemed to be in scope."
The test objective describes the "why" of the testing effort. The details of the "what" will be described in the scope portion of the test plan. Once again, any general testing objectives should be documented in the "Best Practices" repository. General or common objectives for any testing effort could include expanding the test case regression suite, documenting new requirements, automating test cases, and updating existing test cases.
In scope: The components of the system to be tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope." This can take the form of an itemized list of those "in scope": requirements, functional areas, systems, business functions or any aspect of the system that clearly delineates the scope to the testing organization and any other organization involved in the deployment. The "What is to be tested?" question should be answered by the in scope portion of the test plan -- the aspects of the system that will be covered by the current testing effort.
Out of scope
The components of the system that will not be tested also need to be clearly defined as being "out of scope." This does not mean that these system components will not be executed or exercised; it just means that test cases will not be included that specifically test these system components. The "What is NOT to be tested?" question should be answered by the out of scope portion of the test plan. Often neglected, this part of the test plan begins to deal with the risk-based scheduling that all test organizations must address -- What parts of the system can I afford not to test? The testing approach section of the test plan should address that question.
Approach
This section defines the testing activities that will be applied against the application for the current testing phase. This addresses how testing will be accomplished against the in scope aspects of the system and any mitigating factors that may reduce the risk of leaving aspects of the system out of scope.
The approach should be viewed as a to-do list that will be fully detailed in the test schedule. The approach should clearly state which aspects of the system are to be tested and how: backup and recovery testing, compatibility/conversion testing, destructive testing, environment testing, interface testing, parallel testing, procedural testing, regression testing, application security testing, storage testing, stress and performance testing, and any other testing approach that is applicable to the current testing effort. The reasoning for using any given set of approaches should be described, usually from the perspective of risk.
Assumptions
Assumptions are facts, statements and/or expectations of other teams that the test team believes to be true. Assumptions specific to each testing phase should be documented; these are the assumptions upon which the test approach was based. Listed assumptions are also risks: if any of them prove not to be true, there may be a negative impact on the testing activities. In any environment there is a common set of assumptions that apply to any given release. These common assumptions should be documented in the "Best Practices" repository; only assumptions unique to the current testing effort, and perhaps those common assumptions critical to the current situation, should be documented in the test plan.
Dependencies
Dependencies are events or milestones that must be completed before any given testing activity can proceed. These are the dependencies that will be presented in the test schedule. In this section the events or milestones that are deemed critical to the testing effort should be listed and any potential impact or risks to the testing schedule itemized.
Risks
Risks are factors that could negatively impact the testing effort. An itemized list of risks should be drawn up and their potential impact on the testing effort described. Risks that have been itemized in the project plan need not be repeated here unless the impact to the testing effort has not already been clearly stated.
Schedule
The test schedule defines when and by whom testing activities will be performed. The information gathered for the body of the test plan is used here in combination with the available resource pool to determine the test schedule. Experience from previous testing efforts along with a detailed understanding of the current testing goals will help make the test schedule as accurate as possible. There are several planning and scheduling tools available that make the plan easier to construct and maintain.
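As noted above, these deliverables do not have to live in documents; the same content can be captured in a structured form inside a tool. Below is a minimal sketch, assuming Python as the tooling language, of the core test plan fields captured as a simple record. All field values are illustrative assumptions rather than content from this article.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TestPlan:
    objective: str                                          # the "why" of the testing effort
    in_scope: List[str] = field(default_factory=list)       # what will be tested
    out_of_scope: List[str] = field(default_factory=list)   # what will not be tested
    approach: List[str] = field(default_factory=list)       # e.g. regression, interface, stress testing
    assumptions: List[str] = field(default_factory=list)
    dependencies: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    schedule: Dict[str, str] = field(default_factory=dict)  # milestone -> target date

plan = TestPlan(
    objective="Ensure the enhanced online reporting functionality performs to specification.",
    in_scope=["online reporting", "report export"],
    out_of_scope=["point-of-sale hardware"],
    approach=["regression testing", "stress and performance testing"],
    assumptions=["test environment is refreshed with month-end data"],
    dependencies=["report server build delivered by development"],
    risks=["late delivery of the report server build compresses the execution window"],
    schedule={"test execution complete": "2008-10-15"},
)
print(plan.objective)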
Test case
Test cases are the formal implementation of a test case design. The goal of any given test case or set of test cases is to detect defects in the system being tested. A test case should be documented in a manner that is useful for the current test cycle and any future test cycles. At a bare minimum, each test case should contain the author, name, description, step, expected results and status (a structured sketch follows the field descriptions below).
Test case name
The name or title should contain the essence of the test case, including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases from occurring.
Test case description
The description should clearly state the sequence of business events to be exercised by the test case. The test case description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.
Test case step
Each test case step should clearly state the navigation, data and events required to accomplish the step. Using a common descriptive approach encourages conformity and reuse. Keywords offer one of the most effective approaches to test case design and can be applied to both manual and automated test cases.
Expected results
The expected results describe the expected behavior of the system after any test case step that requires verification or validation. This could include screen pop-ups, data updates, display changes or any other discernible event or transaction on the system that is expected to occur when the test case step is executed.
Status
This is the operational status of the test case: Is it ready to be executed?
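To make the minimum test case fields concrete, here is a minimal sketch, assuming Python and a keyword-style step design as mentioned above. The field values, keywords and status values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    keyword: str          # e.g. "login", "open_report", "verify_text" (illustrative keywords)
    data: dict            # navigation details and input data for the step
    expected_result: str  # behavior to verify after the step, if any

@dataclass
class TestCase:
    author: str
    name: str
    description: str
    steps: List[TestStep] = field(default_factory=list)
    status: str = "draft"  # operational status, e.g. draft, ready, blocked

tc = TestCase(
    author="qa.analyst",
    name="Reporting - generate monthly accounting report",
    description="Verify an authorized user can generate the enhanced online accounting report.",
    steps=[
        TestStep("login", {"user": "reporting_user"}, "dashboard is displayed"),
        TestStep("open_report", {"report": "monthly accounting"}, "report renders with current data"),
    ],
    status="ready",
)
print(f"{tc.name}: {len(tc.steps)} steps, status={tc.status}")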
Documenting defects
The primary purpose of testing is to detect defects in the application before it is released into production. Furthermore, defects are arguably the only product the testing team produces that is seen by the project team. Document defects in a manner that is useful in the defect remediation process. At a bare minimum, each defect should contain the author, name, description, severity, impacted area and status (a structured sketch follows the field descriptions below).
Defect name
The name or title should contain the essence of the defect, including the functional area and nature of the defect.
Defect description
The description should clearly state what sequence of events leads to the defect. When possible include a screenshot or printout of the error.
How to replicate
The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.
Defect severity
The severity assigned to a defect depends on the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production.
Impacted area
The impacted area can be referenced by functional component or functional area of the system. Often both are used.
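A defect record can be captured in the same structured way. The sketch below, again assuming Python, holds the minimum defect fields named above; the severity scale and example content are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Defect:
    author: str
    name: str
    description: str     # sequence of events leading to the defect, detailed enough to replicate
    severity: str        # e.g. critical, high, medium, low (illustrative scale)
    impacted_area: str   # functional component and/or functional area
    status: str = "new"  # e.g. new, triaged, fixed, verified, closed

d = Defect(
    author="qa.analyst",
    name="Reporting - monthly report times out with two months of data",
    description=("1. Log in as a reporting user. 2. Generate the accounting report for a two-month range. "
                 "3. The request times out after 60 seconds (screenshot attached)."),
    severity="high",
    impacted_area="Online reporting / report generation",
)
print(f"[{d.severity.upper()}] {d.name} ({d.status})")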
Status report
A test organization and members of the testing team will be called upon to create status reports on a daily, weekly, monthly and per-project basis. The content of any status report should remain focused on the testing objective, scope and scheduled milestones currently being addressed. It is useful to state each of these at the beginning of each status report and then publish the achievements or goals accomplished during the current reporting period, as well as those planned for the next reporting period. (A structured sketch follows the field descriptions below.)
Any known risks that will directly impact the testing effort need to be itemized here, especially any "showstoppers" that will prevent any further testing of one or more aspects of the system.
Reporting period
This is the period covered in the current status report. Include references to any previous status reports that should be reviewed.
Mission statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment.
Current scope
The components of the system being tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope," and any related components that are not being tested need to be clearly itemized as "out of scope."
Schedule milestones
Any schedule milestones being worked on during the current reporting period need to be listed and their current status clearly stated. Milestones that were scheduled but not addressed during the current reporting period need to be raised as risks.
Risks
Risks are factors that could negatively impact the current testing effort. An itemized list of risks that are currently impacting the testing effort should be drawn up and their impact on the testing effort described.
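The status report fields above can likewise be captured as a structured record and rendered on demand. The following is a minimal sketch, assuming Python; the reporting period, milestones and risks shown are invented examples, not results from any real project.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StatusReport:
    reporting_period: str
    mission_statement: str
    in_scope: List[str] = field(default_factory=list)
    out_of_scope: List[str] = field(default_factory=list)
    milestones: List[str] = field(default_factory=list)  # each milestone with its current status
    risks: List[str] = field(default_factory=list)       # items currently impacting the effort

    def render(self) -> str:
        # Produce a plain-text report in the order the fields are discussed above.
        lines = [
            f"Reporting period: {self.reporting_period}",
            f"Mission: {self.mission_statement}",
            "In scope: " + ", ".join(self.in_scope),
            "Out of scope: " + ", ".join(self.out_of_scope),
            "Milestones:",
            *[f"  - {m}" for m in self.milestones],
            "Risks:",
            *[f"  - {r}" for r in self.risks],
        ]
        return "\n".join(lines)

report = StatusReport(
    reporting_period="Week of 2008-09-01",
    mission_statement="Verify the enhanced online reporting functionality performs to specification.",
    in_scope=["online reporting"],
    out_of_scope=["point-of-sale hardware"],
    milestones=["Test case design complete (done)", "Test execution 40% complete (on track)"],
    risks=["Report server build delivered late; execution window compressed"],
)
print(report.render())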

By David W. Johnson
