
Friday, November 14, 2008

Checklist for Website Testing

The World Wide Web is browsed by users with widely different knowledge levels, so while testing websites (static or dynamic) the QA department should concentrate on various aspects to make the presentation of the website on the Web effective.
Aspects to cover:
Functionality
Usability
User Interface
Server-side Interface
Compatibility
Security
Performance

1. Functionality
1.1 Links
The objective is to check all the links in the website (a minimal link-checker sketch follows this list).

1.1.1 All hyperlinks
1.1.2 All internal links
1.1.3 All external links
1.1.4 All mail links
1.1.5 Check for orphan pages
1.1.6 Check for broken links
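As a rough, hedged sketch of automating the broken-link and mail-link parts of this checklist (Python is used purely for illustration; the start URL is a placeholder and the script only follows links found on a single page):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests  # third-party HTTP client (pip install requests)

START_URL = "https://example.com/"   # placeholder site under test

class LinkCollector(HTMLParser):
    """Collect the href target of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = requests.get(START_URL, timeout=10)
collector = LinkCollector()
collector.feed(page.text)

for link in collector.links:
    url = urljoin(START_URL, link)            # resolves internal/relative links
    if url.startswith("mailto:"):
        print("mail link (check manually):", url)
        continue
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(status, url)                        # 4xx/5xx or errors suggest broken links
```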
1.2 Forms
Check for the integrity of submission of all forms

1.2.1 All field-level checks
1.2.2 All field-level validations
1.2.3 Functionality of create, modify, delete and view
1.2.4 Handling of wrong inputs
1.2.5 Default values, if any (standard)
1.2.6 Optional versus mandatory fields
1.3 Cookies
1.3.1 Check which cookies have to be enabled and how and when they expire (see the sketch below).
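A small, hedged sketch of inspecting which cookies a page sets and when they expire (the URL is a placeholder; a real check would also cover behaviour with cookies disabled):

```python
from datetime import datetime, timezone

import requests  # third-party HTTP client (pip install requests)

# Placeholder page that is expected to set cookies.
response = requests.get("https://example.com/login", timeout=10)

for cookie in response.cookies:
    if cookie.expires is None:
        expiry = "session cookie (expires when the browser closes)"
    else:
        expiry = datetime.fromtimestamp(cookie.expires, tz=timezone.utc).isoformat()
    print(f"{cookie.name}: secure={cookie.secure}, expires={expiry}")
```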
1.4 Web Indexing
Depending on how the site is designed using meta tags, frames, HTML syntax, dynamically created pages, passwords or different languages, the site will be searchable in different ways.

1.4.1 Meta tags
1.4.2 Frames
1.4.3 HTML syntax
1.5 Database
Two types of errors may occur in a web application:

1.5.1 Data integrity: missing or wrong data in tables
1.5.2 Output errors: errors in writing, editing or reading operations on the tables

Monday, November 10, 2008

Testing Web services and RIAs


Given the number of components that go into a complete Web service, it is not surprising that complete testing is difficult. Just consider how many different technologies may be involved in even a simple client and server. An Ajax-style rich Internet application (RIA), for example, combines JavaScript, CSS and HTML markup on the client alone. Add in the Internet connection, plus the server-side application code and the database it works with, and you have many points of potential conflict, bugs and performance problems.

General testing principles

Let's review some of the general principles of testing and debugging, which are applicable no matter which toolkits or languages you use:
Design with testing in mind. A proper choice of abstraction for inputs and outputs will allow testing without the added complexity of a network or an HTTP server.
Test separate parts wherever possible. Some developers favor the "unit testing" philosophy to enforce testing at a low level. In any case, use language features such as assertions in Java to catch and identify bad inputs to functions (see the sketch after this list).
Document as you go; use explanatory names for interfaces, classes, methods and variables. One of the advantages claimed for unit testing is that it forces good documentation.
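As a minimal sketch of the "catch bad inputs early" idea (shown in Python rather than Java purely for brevity; the function and test are invented for illustration):

```python
import unittest

def parse_quantity(raw):
    """Convert a form field to a positive integer, rejecting bad input at the boundary."""
    value = int(raw)                               # raises ValueError for non-numeric input
    assert value > 0, "quantity must be positive"  # assertion catches bad inputs early
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_bad_input_is_rejected(self):
        with self.assertRaises((ValueError, AssertionError)):
            parse_quantity("-1")

if __name__ == "__main__":
    unittest.main()
```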

Testing browser clients - Firefox

We have come a long way from the days when a browser's "view source" command was the only way to inspect a Web page. These days a typical RIA uses HTML markup, CSS driven formatting and JavaScript with a Web service providing data in JSON (JavaScript Object Notation) or XML. Fortunately developer support tool providers are working to keep up.
The most advanced support tools today are found in the free open-source Mozilla Firefox browser, a program that belongs on every serious Web developer's hard disk. I just installed Firefox 3.0 and was delighted to find that the "Live HTTP Headers" tool is now part of the standard download. This tool can capture and display the exact request and response headers for all HTTP requests that go to make up a modern Web page. Inspecting these headers to ensure that the parts of your RIA are being requested correctly is a really good idea.
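If you also want to capture headers outside the browser, a few lines of script can complement Live HTTP Headers; this is only a sketch, and the URL is a placeholder for one of the requests that makes up your page:

```python
import requests  # third-party HTTP client (pip install requests)

# Placeholder URL; substitute one of the resources your RIA actually requests.
response = requests.get("https://example.com/app/data.json", timeout=10)

print("Request headers sent:")
for name, value in response.request.headers.items():
    print(f"  {name}: {value}")

print("Response headers received:")
for name, value in response.headers.items():
    print(f"  {name}: {value}")

print("Status:", response.status_code)
```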

There are a number of Firefox add-ons for dealing with JavaScript, which is a good thing since JavaScript is an essential part of typical RIAs. I installed the "Firebug" add-on, which has been updated for Firefox 3.0. The interface provides for inspecting and editing HTML, CSS, and JavaScript. The inspector for HTML is based on the document object model (DOM), so complex pages can be examined.

With Firebug network monitoring enabled, you can capture the request and response headers and see the response size and amount of time each request required. This feature is great for seeing the causes of your application's perceived response speed. You may be surprised to find that a single request resulting in an error is holding up the entire application.
The Firebug JavaScript inspector can monitor the execution time of all JavaScript functions. You can modify JavaScript objects and insert breakpoints in JavaScript methods as well as change variable values, insert new code or execute JavaScript directly. By executing the JavaScript that requests data from your Web service, you can test with a wide range of inputs without having to create separate HTML pages. In general, Firebug is designed to assist developers with Ajax based applications, using both XML and JSON.

By William Brogden

Wednesday, October 29, 2008

User acceptance testing and test cases

Q- What's the purpose of acceptance testing? Can we use the same test cases of system testing for acceptance testing?

A- As with most questions folks ask about items related to software testing, the answer starts with "It depends…" In this case, it depends on what the client or team means when they refer to acceptance testing (and possibly how the team defines and implements test cases). Rather than dig too far into all of the variability in the use of these terms I've seen over the course of my career, I'm going to state some assumptions.

Let's assume that we're talking about "user acceptance testing." Acceptance testing could relate to anyone whose approval is required prior to launching an application, but user acceptance testing is by far the most common. Further, let's assume Cem Kaner's definition of a test case -- "a test case is a question that you ask of the program. The point of running the test is to gain information, for example whether the program will pass or fail the test." This allows us to focus on the point of having test cases rather than focusing on how they are documented.

With that out of the way, let's take a look at the first part of the question – "What is the purpose of acceptance testing?" Simply speaking, the purpose is to give the end user a chance to give the development team feedback as to whether or not the software meets their needs. Ultimately, it's the user that needs to be satisfied with the application, not the testers, managers or contract writers. Personally, I think user acceptance testing is one of the most important types of testing we can conduct on a product. The way I see it, I care a whole lot more about whether users are happy with the way a program works than whether or not the program passes a bunch of tests that were created by testers in an attempt to validate the requirements that an analyst did their best to capture and a programmer interpreted based on their understanding of those requirements.
Which leads us to the second part of the question – "Can we use the same test cases for system testing and acceptance testing." I've seen a lot of projects where there is a tester put in charge of developing user acceptance tests. They start with test cases. They then write detailed scripts, including test data, for users to follow through the application and ask those users to check boxes on those scripts as pass or fail for the test cases the tester decided to include. Sometimes at the end of the script the tester leaves a section for free responses from the end-users, but in my experience, not very often.

Now, that process has never made any sense to me. If the point of user acceptance testing is to find out if the user is happy with the software, what sense does it make for some tester to tell them what to look for? Why not just ask the users to try the software and tell the team what they think? The only answer to that question I've ever gotten is "It's too hard to figure out if they are actually happy, so we're just trying to figure out if we gave them what they asked for according to the requirements document that they signed off on." Which boils down to "we're just getting the users to agree that we should get paid." So, if that is your goal, go ahead and use system test cases. But if your goal is to determine user satisfaction, just let them use the system and tell you what they like and what they don't like about the system. I'm willing to bet you'll end up with a better application that way.

By Scott Barber

Monday, October 20, 2008

Six functional tests to ensure software quality

Six types of functional testing can be used to ensure the quality of the end product. Understand these testing types and scale the execution to match the risk to the project.

1. Ensure every line of code executes properly with Unit Testing.
Unit testing is the process of testing each unit of code in a single component. This form of testing is carried out by the developer as the component is being developed. The developer is responsible for ensuring that each detail of the implementation is logically correct. Unit tests are normally discussed in terms of the type of coverage they provide:
Function coverage: each function/method is executed by at least one test case.
Statement coverage: each line of code is covered by at least one test case (requires more test cases than function coverage).
Path coverage: every possible path through the code is covered by at least one test case (requires many more test cases). The sketch below illustrates the difference.
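A toy sketch of the difference between these coverage levels (the function and its thresholds are invented purely for illustration):

```python
def shipping_cost(subtotal, express):
    """Toy example: flat-rate shipping with a free-shipping threshold."""
    cost = 0.0
    if subtotal < 50:        # branch A: small orders pay shipping
        cost = 5.0
    if express:              # branch B: express surcharge
        cost += 10.0
    return cost

# Function coverage: any single call executes the function.
assert shipping_cost(60, False) == 0.0

# Statement coverage: we also need inputs that enter each 'if' body.
assert shipping_cost(40, True) == 15.0

# Path coverage: all four combinations of the two branches must be exercised.
assert shipping_cost(40, False) == 5.0
assert shipping_cost(60, True) == 10.0
```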
2. Ensure every function produces its expected outcome with Functional Testing.

Functional testing addresses concerns surrounding the correct implementation of functional requirements. Commonly referred to as black box testing, this type of testing requires no knowledge of the underlying implementation.
Functional test suites are created from requirement use cases, with each scenario becoming a functional test. As a component is implemented, the respective functional test is applied to it after it has been unit tested.
For many projects, it is unreasonable to test every functional aspect of the software. Instead, define functional testing goals that are appropriate for the project. Prioritize critical and widely used functions and include other functions as time and resources permit.
For detailed information on how to correctly develop use cases to support functional testing, refer to the Info-Tech Advisor research note, "Use Cases: Steer Clear of the Pitfalls."
3. Ensure all functions combine to deliver the desired business result with System Testing.

System testing executes end-to-end functional tests that cross software units, helping to realize the goal of ensuring that components combine to deliver the desired business result. In defining the project's system testing goals, focus on those scenarios that require critical units to integrate.
Also, consider whether all subsystems should be tested first or if all layers of a single subsystem should be tested before being combined with another subsystem.
Combining the various components together in one swift move should be avoided. The issue with this approach is the difficulty of localizing errors. Components should be integrated incrementally after each has been tested in isolation.
4. Ensure new changes did not adversely affect other parts of the system with Regression Testing.

Regression testing ensures code modifications have not inadvertently introduced bugs into the system or changed existing functionality. Goals for regression testing should include plans from the original unit, functional and system test phases to demonstrate that existing functionality behaves as intended.
Determining when regression testing is sufficient can be difficult. Although it is not desirable to test the entire system again, critical functionality should be tested regardless of where the modification occurred. Regression testing should be done frequently to ensure a baseline software quality is maintained.
5. Ensure the system integrates with and does not adversely affect other enterprise systems with System Integration Testing.

System integration testing is a process that assesses the software's interoperability and cooperation with other applications. Define testing goals that will exercise required communication. (It is fruitless to test interaction between systems that will not collaborate once the developed system is installed.) This is done using process flows that encapsulate the entire system.
The need for a developed system to coexist with existing enterprise applications necessitates developing testing goals that can uncover faults in their integration. In the case that the new system is standalone software and there is no requirement for compatibility with any other enterprise system, system integration testing can be ignored.
6. Ensure the customer is satisfied with the system with Acceptance Testing.
Acceptance testing aims to test how well users interact with the system, that it does what they expect and is easy to use. Although it is the final phase of testing before software deployment, the tests themselves should be defined as early as possible in the SDLC. Early definition ensures customer expectations are set appropriately and confirms for designers that what they are building will satisfy the end user's requirements. To that end, acceptance test cases are developed from user requirements and are validated in conjunction with actual end users of the system. The process results in acceptance or rejection of the final product.
By: Sunil Tadwalkar (PMP, GLG Educator)

Monday, October 13, 2008

Managing a software testing team while cultivating talent

Q- How can I manage a test team effectively and enhance my team's testing skills at the same time?

A- The size of your team and the experience level of each person on your team are two considerable influences in how you manage your team. Without knowing either of these factors or the environment you're working in, let me offer several ideas.
I'd begin with each person individually. I often build a custom learning plan for people I hire (or inherit). I ask each person to help clarify what they know in several areas such as database models, SQL, test automation, types of testing such as functional, performance and installation, and the subject domain we're working in, whether it's banking software, contact management or another field. I work with each person individually as much as I feasibly can and help each person grow their knowledge in these areas -- or other areas that may be more applicable based on their background and the environment we're working in. Together, we'll look for project work where they can apply knowledge as soon as possible. Let me back up and add that ongoing knowledge and the pursuit of learning isn't a surprise for anyone, since it's a factor in my hiring and a spirit I look for in people.
In terms of building a team's testing skills there are more options. If more than one person is trying to acquire the same or similar knowledge you can establish a buddy system between the two people. An effective pairing will often be two experienced people trying to expand in a new area as opposed to two entry level people who might both be struggling in many areas. Paired testing sessions are an option with a more senior tester working with a less experienced tester.
We can learn from every person we meet. If you build a learning list together with the team, you should look to different people on your team to lead. The point is that the team is part of building the list. Perhaps your lead automation tester can lead brown-bag lunches or offer learning sessions where manual testers listen in on the automation planning sessions. Unless you have a team of completely inexperienced testers, you should not have to lead all the knowledge exchange sessions, but you will have to start the exchange and provide an environment (time, space and attitude) where knowledge and skills are shared. I've hosted internal book clubs where we read testing books together and then talk about them, but I've found more immediate project work, with relevant small bits to read, to be more effective. Experiment with your team, since each team has its own unique dynamic.
In terms of management, your knowledge exchange program provides leadership opportunities for people. Beyond project work, you'll be able to see how your team members work together or perhaps, don't work together. I'd be looking for energy levels, willingness and commitment to learning. The sessions you host will give you another opportunity to observe the group and each individual.
Since I don't think any of us are ever done learning, you can also demonstrate to the team what you're learning and how you go about pursuing more skills or background. Someone on your team might have more experience in an area, and this could be a great way for you to learn and someone else to teach. Knowledge exchange is about exchanging, and I think if you hold the title of manager or lead but demonstrate that you're still learning and are open to someone else teaching you, then you're fostering a true exchange.
By Karen N. Johnson

Wednesday, October 8, 2008

The benefits of user acceptance testing

Q- So we've been involved in system testing of different applications and have acquired good knowledge of each of the applications. Now we would like to move into the user acceptance testing area. UAT has traditionally been done by BSAs -- a short test cycle once system testing is over, and most often the UAT test cases are derived from the system test cases themselves. One question put to us is what difference or value we will bring on board in terms of test cases or coverage in UAT. Can you help me with this?

Answer- If I understand your situation clearly, you and your team know several applications well and have been testing the applications. Now you'll be directing user acceptance testing as well and need to explain what benefit you and your team can provide to UAT.
Let me share details of one of my experiences and then answer your questions more directly. I was in a similar situation once and worked directly with users through UAT. I was able to teach the users more about the application. Once the users were able to see more intricacies in the application, they became more skilled testers themselves and appreciated the testing team (as well as the developers) even more than they previously did. I gave them ideas and in turn I learned what quirks of the application irritated the users. I learned more about their perspective too.
I think there are multiple benefits, such as the ones I highlighted, as well as a few more. You'll get to know the users; they'll get to know you and your team. The users might be more inclined to share ideas that can add to your testing. You may also better understand what the users need to accomplish and become a better advocate for the users. I believe spending time with users of the products is beneficial.

Tuesday, October 7, 2008

User acceptance testing

Q- I have now been put onto acceptance testing for a project. Before this I was doing integration testing. Now that team is challenging me to find the maximum number of bugs to prove myself, because of some conflicts between us. I really agree with your view of acceptance testing. But we have been asked to design use cases and tests. I would like to face this, so could you support me with your valuable tips to make it successful?

A- This is a sadly frequent situation. In my opinion, a good test script for user acceptance testing is similar to the following:

"I'm going to give you a brief demonstration of how this application works. Then I will provide you with a user's manual and some sample data (such as a list of products that have been previously entered into the system) and I'd like you go use the application to complete the tasks that you would use an application such as this for. As you work, please provide your feedback on this form."
The problem, however, is that this kind of feedback is not what most managers and stakeholders are looking for when they ask for user acceptance testing to be conducted. What they tend to be looking for is the answer to the question:
"Do the users of the system agree that we have met the requirements we were given?"
In an ideal world, high user satisfaction would map directly to successfully implemented requirements. Unfortunately, this is not often the case. Which leaves us with the dilemma of trying to balance the needs of the managers and stakeholders with one of the core ethical principles related to software testing, as spelled out in the ACM code of ethics, section 2.5 below (you can see the entire code of ethics reprinted at the Association for Software Testing Web site):
"Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks. Computer professionals must strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives. Computer professionals are in a position of special trust, and therefore have a special responsibility to provide objective, credible evaluations to employers, clients, users, and the public."
So what the question really boils down to is:
"How do I design user acceptance tests that both satisfy the needs of management to determine if the end users agree that the requirements have been met while also satisfying my obligation to capture the information I need to provide a comprehensive and thorough evaluation of the user's overall satisfaction with the product?"
Luckily, I believe the answer is easier than the question. In my experience, if you simply complement highly structured, step-by-step user acceptance scripts containing specific pass/fail criteria derived from the system requirements with both the time and a mechanism for providing non-requirement-specific feedback, users will provide you with answers to both of the questions of interest. All this requires on your part is to encourage the users, in addition to executing the user acceptance tests that you provide, to use the system as they normally would and to provide freeform feedback in the space you provide in the script about their satisfaction with the application as it stands today. In this way, you will collect the pass/fail information that it sounds like your managers and stakeholders are asking you for, as well as the information you need to be the user's advocate for changes or enhancements to the system that have resulted from unknown, overlooked or poorly implemented requirements.

Monday, September 29, 2008

Performance testing in the production environment

Question - Could you please tell me how to relate the performance test environment to the production environment? I mean scaling the performance environment to the production environment. What factors should we keep in mind for the performance environment?

Ans: It can be both inaccurate and dangerous to compare performance results obtained in the test environment to the production environment. The two most likely differences in the environments are system architecture and volume of data. Other differences might include the class of machines (app and Web servers), load balancers, report servers, and network configurations. This is why making a comparison can be inaccurate. It can be dangerous if production planning is based on performance transaction timings obtained in an environment that might be very different.

Review the system diagram for both environments and see if there are additional differences you can identify. Clearly communicate these differences if anyone suggests using results from the test environment to imply that the performance timing results would be the same in production.

I advocate testing in production whenever possible. In order to execute performance tests in production, I've typically worked in the middle of the night -- from 2am to 5am, for example -- while a production outage is taken. I've worked middle of the night on holiday weekends in order to gain test time in production as well. If you can't execute tests in production and you are left to execute performance tests in the test environment, then I recommend learning the performance behavior from your test environment and then communicating test results in terms of performance characteristics versus transaction timings.

Performance characteristics might include knowledge of CPU usage or performance degradations. For instance, you might be able to discover that the report performance exceeds the acceptable range defined when generating a report with some specified amount of data (such as 2 months of accounting numbers). Or you might learn search performance begins to degrade when X number of users are logged into the system and X numbers of users are executing searches at the same time. You can look for high level information and learn overall performance characteristics that can be helpful but won't provide performance timings that should be used to presume production will behave in the same way.
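As a rough, hedged sketch of how such a degradation characteristic might be observed in a test environment (the search URL and user counts are placeholders, and this is no substitute for a dedicated load-testing tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client (pip install requests)

SEARCH_URL = "https://test-env.example.com/search?q=smith"   # placeholder

def timed_search(_):
    start = time.perf_counter()
    requests.get(SEARCH_URL, timeout=30)
    return time.perf_counter() - start

# Ramp the number of concurrent simulated users and watch how the average
# response time changes -- a performance characteristic of the test
# environment, not a prediction of production timings.
for users in (1, 5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(timed_search, range(users)))
    print(f"{users:>3} concurrent searches: avg {sum(timings) / len(timings):.2f}s")
```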

By: Karen N. Johnson

Wednesday, September 24, 2008

Testing Mobile Phone Applications......continued..

Find Ways to Understand and Simplify Problems
I have found diagnostic client software and diagnostic Web servers particularly useful for discovering and debugging issues with transcoders. Both the client and the server are designed to report the information that is sent and received. Find out whether the content is expected to be transcoded, and if so, how. If not, the data sent by one end should be received unchanged at the destination and vice-versa. The diagnostic software recorded all the data and made problems easier to detect.
Use Complementary Tools
Find complementary ways to test using Web browsers for Web-based mobile sites. Firefox has numerous free plug-ins that emulate a phone's Web browser and make manual testing easier. I use the following: WMLBrowser, Web Developer, User Agent Switcher, and Modify Headers.
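The same idea can be applied from a script by sending a phone-like User-Agent header; the header string and URL below are placeholders rather than an exact device signature, so treat this only as a sketch:

```python
import requests  # third-party HTTP client (pip install requests)

# Placeholder User-Agent resembling an older mobile browser; real testing
# should use the exact strings of the handsets you care about.
MOBILE_UA = "Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1) AppleWebKit/413"

response = requests.get(
    "https://m.example.com/",             # placeholder mobile site
    headers={"User-Agent": MOBILE_UA},
    timeout=10,
)

print(response.status_code)
print(response.headers.get("Content-Type"))
print(len(response.content), "bytes")     # mobile pages should usually stay small
```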
Reduce the Number of Combinations
As there are thousands of permutations of phones and carriers, pick an exemplary subset of phones to test with. For instance, when testing Java software (written in Java 2 Micro Edition), I test on classes of phones that include: Nokia Series 60 second and third editions; Sony Ericsson's Java Platform 6, 7, and 8 phones; and BlackBerry models based on the keyboard layout and operating system version. Pick popular phones and phones with large and small screens and a variety of keyboards, including T9 (where the alphabet is split across the numeric keys 2 to 9), QWERTY, and other unusual keyboard layouts. Over time you may collect "interesting" phones that help expose application flaws. For example, the core software of one of my phones has been heavily customized by the carrier, and it quickly exposes limitations in applications. By finding and reporting these issues early, the developers were able to revise their application software so it was much more flexible and robust.

Here's a site that details another way to classify your phones based on the operating system and UI: Using a Device Hierarchy.
By Julian Harty

Testing Mobile Phone Applications

Summary:
It took eighteen months for Julian Harty to overcome the various challenges of testing mobile wireless applications. In turn, he has learned some valuable lessons that he wants to share with you in this week's column.

Eighteen months ago, I started learning about the joys and challenges of testing mobile wireless applications. This article is dedicated to the various tips and tricks I've collected along the way that may help you become productive much more quickly.

Reduce Setup Time
Find ways to reduce the time required to configure the phone, install the software, and learn about the underlying connectivity. For example:
Your carrier or handset manufacturer may enable you to download the Internet settings to your phone rather than trying to discover and then manually key in the obscure settings.
Often the software needs to be installed from a Web site. Use text messages to send long Web addresses. Keying a URL can take several minutes and one false move may mean starting again!
Learn how to use a computer to install the software. Many manufacturers provide free software that will enable you to add and remove software applications relatively painlessly from a computer.
Figure Out Connectivity
Mobile connectivity remains a challenge. But remember, a connection relies on at least four elements:
1. Configuration of the phone
2. The service provided by the carrier (and paid for by the user)
3. The connectivity between the carrier’s wireless network and the Internet (where gateways can filter, modify, convert, or even block communications for various reasons)
4. And the rest of the connection to the Web/application server, which may include more gateways, firewalls, etc.
Understand Your Data Plan:
Carriers may offer a range of data services, from very limited access to a small list of approved Web sites (called a walled garden in the industry) to full "Internet" access that may even allow Voice over IP, video streaming, etc. Some carriers provide clear information on which services are available for each price plan; for others, you may have to research what services and Web addresses work reliably. Check how much you pay for data before embarking on data-intensive applications. I had a monthly data bill that was more than $300—even though I didn't use any of the installed applications on my phone during that time. However, one of the applications polled its server in the background while I was abroad. At $16/MB transferred, it was an expensive lesson to learn!
To be continued........

Tuesday, September 23, 2008

Software testing in a virtual environment

Q- What is the likelihood of capturing accurate load testing results in a virtual test environment? We use LoadRunner/PerformanceCenter for performance testing. Our company is in the process of making use of virtualization. It seems this may be ideal for functional test environments, but not for performance test environment. What is your opinion?

A- There are a lot of ways to use virtual environments in your performance testing, so there's no easy answer to this question. I'm assuming that you're referring to hosting the entire application in a virtual environment and running your performance testing against that platform. My answer is that, as always, it depends.
Some research on the topic has found that virtual environments don't scale as well as non-virtual environments. In a study by BlueLock, a company that provides IT infrastructure as a service, they found that "the number of simultaneous users that could be handled by the virtualized server was 14% lower than the number of simultaneous users being handled by the traditional server configuration."
This is consistent with my experience testing financial service applications in virtual environments. If you don't have much choice, or if you have a lot of pressure to make it work, I would recommend that you perform a comparison performance test to prove out the new platform. If you can do that successfully, you'll have some confidence that the platform is comparable. But just be aware that over time, as the application changes and the server configurations change (both virtual and the physical servers in production) your comparison will become outdated. It may happen faster than you might think.
By Mike Kelly

Sunday, September 21, 2008

Prioritizing software testing on little time

Q- Suppose I am testing a Web application, the time period is very short, and we have a heap of test cases. Which test cases should we run first to make our Web site more secure and reliable?
Expert’s Response- This question pops up in various forms all the time. It boils down to "We don't have enough time to test everything, so what do we test?" Not having enough time, of course, is not only the status quo for testing software, it is a universal truth for any software that will ever go into production.
Given that, here's my advice.
Start by forgetting that you have any test cases at all.
Make a list (quickly -- remember, we don't have enough time to test, so let's not waste what little time we have making lists) of usage scenarios for each of the following categories. I usually limit myself to five on the first pass, but no matter what, move on to the next category as soon as you find yourself thinking about the category you are on. If you have to stop and think, whatever you come up with isn't important enough.
a. What things will users do most often with this application?
b. What areas of this application are most likely to contain show-stopping defects?
c. What parts of this application are critical to the business?
d. Are any parts of this application governed by legal or regulatory agencies?
e. What parts of the application would be most embarrassing to the company if broken?
f. What parts of the application has my boss said must be tested?
Prioritize the list. If you've made the list in a word processor or using note cards, this will take under 60 seconds (if you have to write a new list by hand and you write as slowly as I do, it will probably take a little longer). Here are the rules for prioritizing.
Count the number of times a scenario appears across your categories. The more times the scenario appears, the higher the priority (a quick sketch of this counting appears after these steps).
In case of a tie, 'a' comes before 'b' comes before 'c,' etc.
Now scan your test cases. Note which ones are covered and which ones aren't. On the ones that aren't covered, ask yourself, "Can I live with not testing this?" If the answer is no, add it to the bottom of the list.
Start testing.
If you complete these tests before time is up, do the same exercise again without repeating any usage scenarios. If not, at least you have a defensible list of what you did and did not test and lost all of about 15 minutes of testing time creating that list.
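A minimal sketch of the counting-and-tie-breaking rule described above (the category letters mirror the lettered questions, and the scenario names are invented for illustration):

```python
from collections import Counter

# Usage scenarios listed per category; 'a' through 'f' mirror the questions above.
categories = {
    "a": ["checkout", "search", "login"],
    "b": ["checkout", "profile edit"],
    "c": ["checkout", "search"],
    "d": ["profile edit"],
    "e": ["login"],
    "f": ["search"],
}

counts = Counter()
first_seen = {}   # earliest category letter a scenario appears in, for tie-breaking
for letter in sorted(categories):
    for scenario in categories[letter]:
        counts[scenario] += 1
        first_seen.setdefault(scenario, letter)

# Higher count first; ties broken by the earlier category letter ('a' before 'b'...).
priority = sorted(counts, key=lambda s: (-counts[s], first_seen[s]))
print(priority)   # ['checkout', 'search', 'login', 'profile edit']
```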
In case you're wondering, this approach is derived from my FIBLOTS heuristic for deciding what usage scenarios to include when developing performance tests. FIBLOTS is an acronym representing the words that complete the sentence "Ensure your performance tests include usage scenarios that are:
Frequent
Intensive
Business critical
Legally enforceable
Obvious
Technically risky
Stakeholder mandated."
I guess for functional testing, it would be "Ensure you test usage scenarios that are:
Frequent
Risky
Business critical
Legally enforceable
Obvious
Stakeholder mandated."
Too bad the acronym FRBLOS isn't as easy to remember as FIBLOTS.

By Scott Barber

V Model


A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

White Box Testing

White box testing deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box or clear box testing.
Tests written based on the white box testing strategy incorporate code coverage, branches, paths, statements and the internal logic of the code. In order to implement white box testing, the tester has to deal with the code and hence needs knowledge of coding and logic, i.e. the internal working of the code.
White box testing also requires the tester to look into the code and find out which unit, statement or chunk of the code is malfunctioning.

Advantages of White box testing are:

i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) Another advantage of white box testing is that it helps in optimizing the code.
iii) It helps in removing extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing are:

i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, which may create problems and result in failure of the application.

Types of testing under White/Glass Box Testing.

Unit Testing:

The developer carries out unit testing in order to check whether a particular module or unit of code is working correctly. Unit testing comes at the most basic level, as it is carried out as and when a unit of code is developed or a particular piece of functionality is built.

Static and dynamic Analysis: Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage: In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage:

No software application can be written in a continuous flow of code; at some point we need to branch the code in order to perform particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
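A small sketch of what branch coverage means in practice (the function is invented for illustration):

```python
def classify_temperature(celsius):
    """Toy example with a single branch point."""
    if celsius >= 38.0:
        return "fever"
    else:
        return "normal"

# Branch coverage requires at least one test case down each branch:
assert classify_temperature(39.2) == "fever"    # 'if' branch taken
assert classify_temperature(36.6) == "normal"   # 'else' branch taken
```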

Security Testing:

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, and any damage to the code of the application. This type of testing needs sophisticated testing techniques.

Mutation Testing: A kind of testing in which small changes (mutations) are deliberately made to the code and the existing tests are run again; if the tests still pass, they are not strong enough to detect the change. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.
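A tiny sketch of the idea (the function is invented, and the "mutant" stands in for the deliberately broken copy a mutation tool would generate):

```python
def is_adult(age):
    return age >= 18           # original code

def is_adult_mutant(age):
    return age > 18            # mutant: '>=' deliberately changed to '>'

# A weak test suite that only checks ages far from the boundary passes
# against both versions, so it fails to "kill" the mutant.
for version in (is_adult, is_adult_mutant):
    assert version(30) is True
    assert version(5) is False

# Adding a boundary test kills the mutant: it passes on the original code
# but fails on the mutated copy, showing the suite can detect the change.
assert is_adult(18) is True
assert is_adult_mutant(18) is False    # mutant detected (killed)
```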

Black box Testing

Black box testing is functional testing, not based on any knowledge of the internal software design or code. Black box testing is based on requirements and functionality, and test cases are derived from the specification.
The system is a black box whose behavior can only be determined by studying its inputs and the related outputs.
Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program (code) is necessary.
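As a hedged sketch, suppose the specification says orders of $100 or more get a 10% discount and smaller orders get none; black-box test cases are derived only from that statement, treating the implementation below as opaque (the function name and rule are invented for illustration):

```python
from decimal import Decimal

def discounted_total(subtotal):            # implementation under test; its internals are irrelevant
    subtotal = Decimal(str(subtotal))
    if subtotal >= 100:
        return subtotal * Decimal("0.9")
    return subtotal

# Test cases derived purely from the specification, not from the code:
assert discounted_total(100) == Decimal("90.0")      # boundary: discount applies
assert discounted_total(99.99) == Decimal("99.99")   # just below boundary: no discount
assert discounted_total(250) == Decimal("225.0")     # typical discounted order
```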

Advantages of Black Box Testing
more effective on larger units of code than glass box testing
tester needs no knowledge of implementation, including specific programming languages
tester and programmer are independent of each other
tests are done from a user's point of view
will help to expose any ambiguities or inconsistencies in the specifications
test cases can be designed as soon as the specifications are complete
Disadvantages of Black Box Testing
only a small number of possible inputs can actually be tested, to test every possible input stream would take nearly forever
without clear and concise specifications, test cases are hard to design
there may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
may leave many program paths untested
cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
most testing related research has been directed toward glass box testing.

Manual vs. automated penetration testing

I have a vague understanding of the differences between manual and automated penetration testing, but I don't know which method to use. Are the automated pen tests trustworthy? Should I use both methods?

You should absolutely use both methods: begin with automated penetration testing and supplement it with manual penetration testing. Automated penetration testing tools tend to be more efficient and thorough, and chances are that malicious hackers are going to use automated attacks against you. These automated test tools come from many sources, including commercial, open-source and custom-designed. Often these tools focus on a particular vulnerability area, so multiple penetration testing tools may be needed. Because these automated tools are only updated monthly or weekly, you must manually verify their output to check for false alarms and to test for the latest vulnerabilities. With over 50 new vulnerabilities being discovered each week, there will always be new vulnerabilities that the tools may not be able to detect. Without this manual testing, your penetration testing will be incomplete.

What is penetration testing


Penetration testing is the security-oriented probing of a computer system or network to seek out vulnerabilities that an attacker could exploit. The testing process involves an exploration of all the security features of the system in question, followed by an attempt to breach security and penetrate the system. The tester, sometimes known as an ethical hacker, generally uses the same methods and tools as a real attacker. Afterwards, the penetration testers report on the vulnerabilities and suggest steps that should be taken to make the system more secure.

In his article "Knockin' At Your Backdoor," security expert Thomas Rude lists some of the system components that an ethical hacker might explore: areas that could be compromised in the demilitarized zone (DMZ); the possibility of getting into the intranet; the PBX (the enterprise's internal telephone system); and the database. According to Rude, this is far from an exhaustive list, however, because the main criterion for testing is value: if an element of your system is worthy of safe-keeping, its security should be tested regularly.

By: Sunil Tadwalkar (PMP)

Friday, September 19, 2008

Integration testing: steps toward SOA quality

Integration testing: the process of testing integrated services. Integration testing of a service should be performed by the QA/testing team and reviewed by both the architecture lead and the development leads. At a minimum, integration testing should consist of both verification of the integrated architectural design as it relates to the service under test and validation of the integrated services. For each service this would consist of testing the functionality of the service and its relationship with all immediate (directly connected) services.
In our example, integration testing of the cart service would involve testing the cart service functionality and the integration of that service to the catalogue service, customer history service, digital fulfillment service, and the Web-enabled presentation layer. The purpose is to discover and address discrepancies between the functional specification and implementation of the cart service and its contractual responsibilities with other (immediate) services. Once again, this is especially important when implementing SOA.
The integration testing effort should focus on the service undergoing integration testing and its contractual responsibilities with other (immediate) services. There are several reasons for taking this approach, not the least of which is that integration testing of SOA solutions is extremely challenging -- the wider the scope of the integration testing effort, the more challenging it becomes. It is useful to focus on the immediate landscape to ensure the contractual obligations are being met by each service and then extend the scope of testing during functional testing. The basic premise is to treat the services as building blocks that compose/support a particular business event or part of an event.
There are several automated SOA testing tools available (commercial and shareware) that help address the testing of services, and there are more traditional testing tools that can be tooled to address SOA testing. Many are able to capture service descriptions and create initial tests based on these descriptions. Those tests can then be automated.

Once you've completed integration testing of closely related services, you can begin true functional testing. This is where the real challenges of testing SOA solutions come to bear and involve the following:
· Third-party services
· Late binding (selection of service)
· Missing/incomplete/changing services
· Multi-platform/Multi-language distributed services

Unit, integration testing first steps toward SOA quality


SOA -- Unit & integration testing
SOA promotes reuse at the service level rather than at the code/objects level. If you think of each component truly as a service, then there are internal aspects (data and process) and external facing (interface) aspects of the service that need to be tested.
It is convenient to think of the internal aspects of the service in terms of unit testing and to think of testing interface relationships with immediate service partners in terms of integration. It should be noted that unit and integration testing is often ignored or given minimal attention in traditional development environments -- the assumption being that downstream testing will catch any errors before the product reaches production. That is not the case in the world of SOA, where the eventual applications of a service could be, and often are, beyond the control of the development group. Demonstrated adherence to the service design and interface specification is one way to reduce the impact of unexpected downstream implementations of the service.

Friday, September 12, 2008

SOA Driven Testing?

By now, I suspect that most folks who are involved with designing, writing, maintaining and/or supporting software have at least heard of the newest addition to the industry's “buzz-acronym alphabet soup." It's not XP (Extreme Programming), OO(Object Oriented) or even TDD (Test Driven Development). This time the buzz-acronym is SOA (Service Oriented Architecture). And in fashion with many of the more recent buzz-acronyms, the expanded phrase sheds little, if any, light on what the term really means. When I checked in with a few friends of mine to make sure I had my terminology straight, one of them pointed me to Martin Fowler’s Blog where he writes…
“…one question I'm bound to be asked is "what do you think of SOA (Service Oriented Architecture)?" It's a question that's pretty much impossible to answer because SOA means so many different things to different people.
• For some SOA is about exposing software through web services…
• For some SOA implies an architecture where applications disappear…
• For some SOA is about allowing systems to communicate over some form of standard structure… with other applications…
• For some SOA is all about using (mostly) asynchronous messaging to transfer documents between different systems…
I've heard people say the nice thing about SOA is that it separates data from process, that it combines data and process, that it uses web standards, that it's independent of web standards, that it's asynchronous, that it's synchronous, that the synchronicity doesn't matter....
I was at Microsoft PDC a couple of years ago. I sat through a day's worth of presentations on SOA - at the end I was on the SOA panel. I played it for laughs by asking if anyone else understood what on earth SOA was. Afterwards someone made the comment that this ambiguity was also something that happened with Object Orientation. There's some truth in that, there were (and are) some divergent views on what OO means. But there's far less Object Ambiguity than there is Service Oriented Ambiguity…"
Service Oriented Ambiguity?!? No *WONDER* I got confused sometimes while reading the articles in CRN, SD-Times, CTO Source, TechRepublic and others about the technologies behind SOA! This is just one more reason I'm thrilled with my choice to be a tester. Compared to figuring out all of the enabling technologies, testing SOA is a piece of cake. Don't get me wrong, testing SOA has its challenges, but we at least have some experience with the concepts. Allow me to explain.
SOA Concept
Let's start by taking a look at what SOA is conceptually all about from a tester’s point of view – without creating a paper that is certain to win us a round of “Buzzword Bingo." Ignoring the hype around the phrase and the acronym, SOA is nothing more than the most recent step in the natural evolution of software and software development that started with the GOTO statement. I'm not being cynical; I'm serious! The dreaded GOTO statement started the evolution of abstraction of code decades ago. Some of you may remember being chastised for using the GOTO statement in line numbered BASIC and upgrading to GOSUB-RETURN to save face before becoming empowered by Functions, Procedures and eventually Java Beans or Objects. Even if your programming background doesn't include first-hand experience with that evolution, you probably recognize all of these programming concepts as methods of minimizing code redundancy and abstracting sections of code to maximize code re-use.
This concept of abstraction and code re-use (the basic concept behind what Fowler called the Object Ambiguity) is what paved the way for the software industry to think in terms of not just reusable segments of code but eventually entire mini-applications that can be used in many different contexts to provide the same, or similar, functionality. Possibly the most well known of this breed of mini-application, as I think of them, are those that process credit card purchases over the web.
I'm sure that it's no surprise to anyone reading this article that once you get beyond the service providers and resellers, there are really only a small handful of organizations that actually publish and maintain the vast majority of credit card processing software. In fact, virtually all of us with Web sites that sell products (often referred to as B2C or Business to Consumer sites) simply "plug in" to one of those pieces of software (for a small fee, of course) to make our once-innocent web site into an E-Commerce web site! Of course, this particular type of mini-application has its own buzz-term -- it's called a Web Service. Web Services have been around for several years and are actually the direct predecessors, or maybe the earliest adopted subset, of SOA.
For years I struggled with the question of "What's the difference between a Service and an Object on Steroids?" It took me almost four years to navigate my way through the implementation technologies and coding patterns to figure out that the fundamental difference is that Objects are programmer-centric abstractions of code and Services are user- or business-centric abstractions of code. Basically, a programmer may write code to reference a number of objects that the user is completely unaware of while that user performs an activity, like logging into a secure web site. If, instead, the "log into a secure web site" activity were written as a Service, it would be a single entity that accepted certain input and responded with certain output. Not only is the user unaware of the Service, but the programmer writing the application need only be aware of the format and contents of the input and output parameters. In fact, SOA is really nothing more than building software applications in such a manner as to be able to take advantage of Services, whether they are available via the web or the next server down on the rack. Independent of all the ambiguity about technologies, protocols and degrees of abstraction, that is really all there is to SOA.
Testing SOA
That said, there are several things about SOA that are going to present challenges that many testers are not used to facing, at least not in the volumes and combinations that we will see with SOA. First, testing Services in an SOA environment is fundamentally different from testing the Objects that inspired them in at least one significant way. Objects were (and are), as we mentioned, programmer-centric segments of code that are likely to be used in more than one area of one or more applications. An object is generally tested directly via unit-tests written by the developer and indirectly by user-acceptance and black-box testers.
Services, however, require a different testing approach because they encompass entire business processes, can call dozens of objects, and are unlikely to have been developed or tested by anyone you will ever meet or speak to. As testers, we have little choice but to validate the service as a black box, probably through some kind of test harness, focusing on input values, output values and data format. Sounds a lot like a unit test, doesn't it?
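As a hedged sketch of that kind of harness-driven, unit-test-like check against a service (the endpoint, payload fields and expected response keys are all placeholders, not a real API):

```python
import requests  # third-party HTTP client (pip install requests)

SERVICE_URL = "https://services.example.com/login"    # placeholder service endpoint

def test_login_service_contract():
    """Black-box check: known input in, expected output shape and values out."""
    payload = {"username": "testuser", "password": "correct-horse"}   # placeholder fields
    response = requests.post(SERVICE_URL, json=payload, timeout=10)

    assert response.status_code == 200
    body = response.json()
    # Validate the data format the service description promises.
    assert set(body) >= {"sessionToken", "expiresIn"}
    assert isinstance(body["sessionToken"], str) and body["sessionToken"]
    assert isinstance(body["expiresIn"], int) and body["expiresIn"] > 0

if __name__ == "__main__":
    test_login_service_contract()
    print("service contract check passed")
```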
A New Approach
The next challenge we testers face is that with SOA, we can no longer get away with thinking about applications exclusively from just unit and black-box perspectives. We absolutely must think about SOA applications in (at least) three logical segments: the services themselves, the user interface, and a communication or SOA interface segment (sometimes referred to as a “service broker”). Sounds easy enough, but here's the kicker: we need to test each of these segments both independently and collectively and we need to test each of these segments at both the unit-level as well as a black-box. This means more testers pairing with developers, more testers writing test harnesses, and more automation via API versus UI.
The testing challenges that SOA present that I am most excited about (yes, I am aware that makes me a geek) are the challenges related to performance testing. We as an industry already have enough trouble finding the time and/or the money to performance test the way we'd like, even when we are lucky enough to be part of an organization that thinks about performance testing at all. Now we're intentionally building applications so we can plug in code that we will likely never see that was probably written and is certainly hosted elsewhere on some machine we are unlikely to have access to, that takes our data in and magically spits out the “answer” (we'll assume it's even the “correct answer”). How, exactly, are we to trust that this magic service is going to handle our holiday peak? Even more frightening, how are we going to trust that a whole bunch of these magic services are going to all perform well together like the “well-oiled machine” we'd have built (or would like to believe we'd build) on our own?
Why am I excited about this you ask? No, not because I think these challenges are going to make me rich (though, that would be nice). I'm excited because I think that SOA is going to force the industry to bridge the gap between top-down (black-box/user-experience) performance testing and bottom-up (unit/component/object-level) performance testing that has needed to be bridged for as long as I've been involved with performance testing.
Performance Testing as it was Meant To Be
Rather than having to figure out the logical segments for decomposition and recomposition, they have already been defined for us. Rather than having to build test harnesses exclusively for performance testing that no one else has thought of, we can piggy-back on the test harnesses used by the functional and unit testers. Rather than starting from a belief that "We wrote it, therefore it will perform well," we will be starting from a position of "Someone else wrote it and we need to validate their performance claims and make sure that it actually works that well with our UI/data/configuration."
Facing the Challenge
I'm certain that some of these challenges may seem pretty, well, challenging to folks who haven't faced them before, but it’s not completely uncharted territory. Keep your eyes open for more articles, presentations and tools focused on this kind of testing. Pay particular attention to the folks who talk about how testing SOA relates to testing EAI (or EDI for that matter), Middleware and Web Services. They are the ones who have taken on similar challenges before.
Martin Fowler closed the blog I referenced earlier this way:
“…many different (and mostly incompatible) ideas fall under the SOA camp. These do need to be properly described (and named) independently of SOA. I think SOA has turned into a semantics-free concept that can join 'components' and 'architecture'…”
So while the developers, vendors, marketers and architects are sorting out the “Buzz-Acronym Soup" Martin alludes to, we testers can be digging into the concept of testing applications that are segregated cleanly enough for us to effectively apply all of our tester skills at more points in both the application and the development process. As a tester, I really can’t think of a better way to spend my time until we switch to the wave made by the next acronym that sticks!
Acknowledgments
This article was first written in support of a webinar presented by Software Quality Engineering on May 9, 2006.
About the Author
Scott Barber is the CTO of PerfTestPlus (www.PerfTestPlus.com) and Co-Founder of the Workshop on Performance and Reliability (WOPR – www.performance-workshop.org). Scott's particular specialties are testing and analyzing performance for complex systems, developing customized testing methodologies, testing embedded systems, testing biometric identification and security systems, group facilitation and authoring instructional or educational materials. In recognition of his standing as a thought-leading performance tester, Scott was invited to be a monthly columnist for Software Test and Performance Magazine in addition to his regular contributions to this and other top software testing print and online publications; he is regularly invited to participate in industry-advancing professional workshops and to present at a wide variety of software development and testing venues. His presentations are well received by industry and academic conferences, college classes, local user groups and individual corporations. Scott is active in his personal mission of improving the state of
performance testing across the industry by collaborating with other industry authors, thought leaders and expert practitioners as well as volunteering his time to establish and grow industry organizations.
His tireless dedication to the advancement of software testing in general and specifically performance testing is often referred to as a hobby in addition to a job due to the enjoyment he gains from his efforts.
About PerfTestPlus
PerfTestPlus was founded on the concept of making software testing industry expertise and thought-leadership available to organizations, large and small, who want to push their testing beyond "state-of-the-practice" to "state-of-the-art." Our founders are dedicated to delivering expert-level software-testing-related services in a manner that is both ethical and cost-effective. PerfTestPlus enables individual experts to deliver expert-level services to clients who value true expertise. Rather than trying to find individuals to fit some pre-determined expertise or service offering, PerfTestPlus builds its services around the expertise of its employees. What this means to you is that when you hire an analyst, trainer, mentor or consultant through PerfTestPlus, what you get is someone who is passionate about what you have hired them to do, someone who considers that task to be their specialty, someone who is willing to stake their personal reputation on the quality of their work -- not just the reputation of a distant and "faceless" company.

By: R. Scott Barber

Wednesday, September 10, 2008

Unit, integration testing first steps toward SOA quality

Unit, integration testing first steps toward SOA quality

The unit testing challenge with SOA is not so much the actual exercise of unit testing but the speed at which it has to occur.
The service-oriented architecture (SOA) paradigm endeavors to address both the time-to-market challenge faced by business and the need for IT to develop, test, and deploy ever-evolving, complex solutions. SOA breaks complex solutions into component parts (services) that present a simple interface to the application landscape while encapsulating both data and process. These services can be provided in-house, by business partners, and by commercial services.
From a testing perspective, the most significant challenges presented by SOA are that SOA application landscapes are "always on," are continuously changing, are loosely coupled, and usually involve multiple providers (in-house, business partners, and commercial services). Finally, the quality of the individual services does not necessarily translate into the overall quality of the business solution -- it is the quality of the whole that truly matters.
Example of an SOA application landscape
Let's look at a simple and rather "coarse" (large complex services) example of an SOA application landscape. In this case, the SOA solution addresses the need to sell digital media online. Service layers consist of a Web-enabled presentation layer, customer account service, catalogue service, cart service, digital fulfillment service, customer history service, and an accounting service that interfaces to a standard financial services database. The following figure illustrates this SOA solution.
From a unit and integration testing perspective we will focus on the "cart service" and its relationship to the presentation layer and immediate service partners. We will later extend this model to address a single business event (customer purchase) in a follow-on article on SOA functional and regression testing. The following figure illustrates the relationship of the cart service to its immediate service partners.
The cart service is loosely coupled to the catalogue, customer history, and digital fulfillment services with a tight coupling to the Web-enabled presentation layer.
SOA -- Unit & integration testing
SOA promotes reuse at the service level rather than at the code/objects level. If you think of each component truly as a service, then there are internal aspects (data and process) and external facing (interface) aspects of the service that need to be tested.
It is convenient to think of the internal aspects of the service in terms of unit testing and to think of testing interface relationships with immediate service partners in terms of integration. It should be noted that unit and integration testing is often ignored or given minimal attention in traditional development environments -- the assumption being that downstream testing will catch any errors before the product reaches production. That is not the case in the world of SOA, where the eventual applications of a service could be, and often are, beyond the control of the development group. Demonstrated adherence to the service design and interface specification is one way to reduce the impact of unexpected downstream implementations of the service.
Unit testing: The process of testing the individual services. Unit testing of a service should be performed by the developer and should be reviewed by development peers. At a minimum unit testing should consist of both verification of the unit (service) design and validation of the unit (service) implementation.
The purpose of unit testing is to discover and address discrepancies between the specification and implementation. This is especially important when implementing SOA because services are often developed in parallel and undergo continuous development/deployment. That means adherence to design specifications and the ability to effectively encapsulate a service is critical to meeting the contractual obligations of the service. The developer or development organization will have to create stubs to test the interfaces the service supports. This becomes critical once development moves towards integration testing.
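As a sketch of the stubbing idea (my illustration, not the article's code), the following Python unit test exercises a toy cart service against a stubbed catalogue partner. CartService, price_of and the values used are hypothetical stand-ins for whatever interface your service actually exposes.

# Unit-testing a toy cart service against a stubbed catalogue partner.
# CartService and price_of are hypothetical; only the stubbing technique matters.
import unittest
from unittest import mock

class CartService:
    """Toy cart that prices items by asking a catalogue partner service."""
    def __init__(self, catalogue):
        self.catalogue = catalogue  # injected partner interface
        self.items = []

    def add_item(self, sku, quantity):
        price = self.catalogue.price_of(sku)  # call across the service boundary
        self.items.append((sku, quantity, price))

    def total(self):
        return sum(quantity * price for _, quantity, price in self.items)

class CartServiceUnitTest(unittest.TestCase):
    def test_total_uses_catalogue_prices(self):
        catalogue_stub = mock.Mock()
        catalogue_stub.price_of.return_value = 9.99  # canned partner response
        cart = CartService(catalogue_stub)
        cart.add_item("ALBUM-42", 2)
        self.assertAlmostEqual(cart.total(), 19.98)
        catalogue_stub.price_of.assert_called_once_with("ALBUM-42")

if __name__ == "__main__":
    unittest.main()

Because the partner interface is injected, the unit test never crosses a real service boundary, which is what makes it fast enough to run on every build.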
The unit testing challenge with SOA is not so much the actual exercise of unit testing but the speed at which it has to occur. There are several well-documented approaches to development and unit testing that will address this challenge, but one that is not often mentioned is the application of Agile development techniques in combination with instrumentation of the code to support continuous integration. Using methodologies and technologies that enable this approach helps address the challenges of continuous development/deployment, a landmark characteristic of the SOA application landscape.
One other aspect of the SOA application landscape that will impact how unit testing is approached is the extent to which in-house developed and deployed services are employed. The more "in-house" the SOA solution is, the more it can be treated as a component-based solution. Basically that means the pace and extent of any changes are within the control of the development organization.
Integration testing: The process of testing integrated services. Integration testing of a service should be performed by the QA/testing team and reviewed by both the architecture lead and development leads. At a minimum integration testing should consist of both verification of the integrated architectural design as it relates to the service under test and validation of integrated services. For each service this would consist of testing the functionality of the service and its relationship with all immediate (directly connected) services.
In our example, integration testing of the cart service would involve testing the cart service functionality and the integration of that service to the catalogue service, customer history service, digital fulfillment service, and the Web-enabled presentation layer. The purpose is to discover and address discrepancies between the functional specification and implementation of the cart service and its contractual responsibilities with other (immediate) services. Once again, this is especially important when implementing SOA.
The integration testing effort should focus on the service undergoing integration testing and its contractual responsibilities with other (immediate) services. There are several reasons for taking this approach, not the least of which is that integration testing of SOA solutions is extremely challenging -- the wider the scope of the integration testing effort, the more challenging it becomes. It is useful to focus on the immediate landscape to ensure the contractual obligations are being met by each service and then extend the scope of testing during functional testing. The basic premise is to treat the services as building blocks that compose/support a particular business event or part of an event.
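One way to keep that focus is a small contract-style check against each immediate partner. The sketch below, again in Python, assumes a hypothetical catalogue endpoint and an agreed set of response fields; all it verifies is that the contract the cart service relies on is being honoured.

# Contract-style integration check between the cart service and the catalogue
# service it depends on. The URL and the agreed response fields are assumptions.
import json
import unittest
import urllib.request

CATALOGUE_ITEM_URL = "http://localhost:8081/catalogue/items/ALBUM-42"  # hypothetical

class CatalogueContractTest(unittest.TestCase):
    def test_item_lookup_honours_the_agreed_contract(self):
        with urllib.request.urlopen(CATALOGUE_ITEM_URL, timeout=10) as response:
            self.assertEqual(response.status, 200)
            item = json.loads(response.read().decode("utf-8"))
        # Fields and types the cart service relies on, per the (assumed) contract.
        for field_name, expected_type in (("sku", str), ("title", str), ("price", float)):
            self.assertIn(field_name, item)
            self.assertIsInstance(item[field_name], expected_type)

if __name__ == "__main__":
    unittest.main()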
There are several automated SOA testing tools available (commercial and shareware) that help address the testing of services, and there are more traditional testing tools that can be adapted to address SOA testing. Many are able to capture service descriptions and create initial tests based on these descriptions. Those tests can then be automated.
Once you've completed integration testing of closely related services, you can begin true functional testing. This is where the real challenges of testing SOA solutions come to bear and involve the following:
· Third-party services
· Late binding (selection of service)
· Missing/incomplete/changing services
· Multi-platform/Multi-language distributed services

By David W. Johnson

Thursday, September 4, 2008

Software testing deliverables: From test plans to status reports

There is a core set of test deliverables required for any software testing phase: test plan, test cases, defect documentation and status reports. Taken together, this set of deliverables takes the testing team from planning to testing and on through defect remediation and status reporting. This does not represent a definitive set of test deliverables, but it will help any test organization begin the process of determining an appropriate set of deliverables.
One common misconception is that these must be presented as a set of documents, but there are toolsets and applications available that capture the content and intent of these deliverables without creating a document or set of documents. The goal is to capture the required content in a useful and consistent framework as concisely as possible.
Test plan: At a minimum, the test plan presents the test objectives, scope, approach, assumptions, dependencies, risks and schedule for the appropriate test phase or phases. Many test organizations will use the test plan to describe the software testing phases, testing techniques, testing methods and other general aspects of any testing effort. General information around the practice of testing should be kept in a "Best Practices" repository -- testing standards. This prevents redundant and conflicting information from being presented to the reader and keeps the test plan focused on the task at hand -- planning the testing effort. (See "The role of a software test manager".)
Objectives -- mission statement: The objective of the current testing effort needs to be clearly stated and understood by the software testing team and any other organization involved in the deployment. This should not be a sweeping statement on testing the "whole application" -- unless that is actually the goal. Instead the primary testing objectives should relate to the purpose of the current release. If this were a point-of-sale system and the purpose of the current release was to provide enhanced online reporting functionality, then the objective/mission statement could be this:
"To ensure the enhanced online reporting functionality performs to specification and to verify any existing functionality deemed to be in scope."
The test objective describes the "why" of the testing effort. The details of the "what" will be described in the scope portion of the test plan. Once again, any general testing objectives should be documented in the "Best Practices" repository. General or common objectives for any testing effort could include expanding the test case regression suite, documenting new requirements, automating test cases, and updating existing test cases.
In scope: The components of the system to be tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope." This can take the form of an itemized list of those "in scope": requirements, functional areas, systems, business functions or any aspect of the system that clearly delineates the scope to the testing organization and any other organization involved in the deployment. The "What is to be tested?" question should be answered by the in scope portion of the test plan -- the aspects of the system that will be covered by the current testing effort.
Out of scope: The components of the system that will not be tested also need to be clearly defined as being "out of scope." This does not mean that these system components will not be executed or exercised; it just means that test cases will not be included that specifically test these system components. The "What is NOT to be tested?" question should be answered by the out of scope portion of the test plan. Often neglected, this part of the test plan begins to deal with the risk-based scheduling that all test organizations must address -- What parts of the system can I afford not to test? The testing approach section of the test plan should address that question.
Approach: This section defines the testing activities that will be applied against the application for the current testing phase. This addresses how testing will be accomplished against the in scope aspects of the system and any mitigating factors that may reduce the risk of leaving aspects of the system out of scope.
The approach should be viewed as a to-do list that will be fully detailed in the test schedule. The approach should clearly state which aspects of the system are to be tested and how: backup and recovery testing, compatibility/conversion testing, destructive testing, environment testing, interface testing, parallel testing, procedural testing, regression testing, application security testing, storage testing, stress and performance testing, and any other testing approach that is applicable to the current testing effort. The reasoning for using any given set of approaches should be described, usually from the perspective of risk.
Assumptions: Assumptions are facts, statements and/or expectations of other teams that the test team believes to be true. Assumptions specific to each testing phase should be documented. These are the assumptions upon which the test approach was based. Listed assumptions are also risks should they be incorrect. If any of the assumptions prove not to be true, there may be a negative impact on the testing activities. In any environment there is a common set of assumptions that apply to any given release. These common assumptions should be documented in the "Best Practices" repository; only assumptions unique to the current testing effort and perhaps those common assumptions critical to the current situation should be documented.
Dependencies: Dependencies are events or milestones that must be completed in order to proceed within any given testing activity. These are the dependencies that will be presented in the test schedule. In this section the events or milestones that are deemed critical to the testing effort should be listed and any potential impact or risks to the testing schedule itemized.
Risks: Risks are factors that could negatively impact the testing effort. An itemized list of risks should be drawn up and their potential impact on the testing effort described. Risks that have been itemized in the project plan need not be repeated here unless the impact to the testing effort has not already been clearly stated.
Schedule: The test schedule defines when and by whom testing activities will be performed. The information gathered for the body of the test plan is used here in combination with the available resource pool to determine the test schedule. Experience from previous testing efforts along with a detailed understanding of the current testing goals will help make the test schedule as accurate as possible. There are several planning and scheduling tools available that make the plan easier to construct and maintain.
Test case: Test cases are the formal implementation of a test case design. The goal of any given test case or set of test cases is to detect defects in the system being tested. A test case should be documented in a manner that is useful for the current test cycle and any future test cycles. At a bare minimum, each test case should contain the author, name, description, step, expected results and status.
Test case name: The name or title should contain the essence of the test case, including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases from occurring.
Test case description: The description should clearly state the sequence of business events to be exercised by the test case. The test case description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.
Test case step: Each test case step should clearly state the navigation, data and events required to accomplish the step. Using a common descriptive approach encourages conformity and reuse. Keywords offer one of the most effective approaches to test case design and can be applied to both manual and automated test cases.
Expected results: The expected results are the expected behavior of the system after any test case step that requires verification or validation. This could include screen pop-ups, data updates, display changes or any other discernible event or transaction on the system that is expected to occur when the test case step is executed.
Status: This is the operational status of the test case. Is it ready to be executed?
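To make the minimum fields concrete, here is one possible way to capture a test case as a structured record, with keyword-style steps as mentioned above. The field names follow this article; the keyword vocabulary and the sample values are assumptions for illustration only.

# One way to capture the minimum test case fields in a structured, reusable form.
# The field names follow the article; the keyword step format is an assumption.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    keyword: str               # e.g. "EnterText", "ClickButton", "VerifyText"
    target: str                # navigation: which screen element the step acts on
    data: str = ""             # test data for the step, if any
    expected_result: str = ""  # filled in for verification/validation steps

@dataclass
class TestCase:
    author: str
    name: str                  # functional area + purpose, per the naming convention
    description: str
    steps: List[TestStep] = field(default_factory=list)
    status: str = "Draft"      # operational status, e.g. Draft, Ready, Blocked

# Example record for a hypothetical cart scenario.
add_single_item = TestCase(
    author="jdoe",
    name="Cart_AddSingleItem_TotalUpdated",
    description="Add one catalogue item to an empty cart and verify the cart total.",
    steps=[
        TestStep("ClickButton", "Add to cart", data="ALBUM-42"),
        TestStep("VerifyText", "Cart total", expected_result="9.99"),
    ],
    status="Ready",
)

A record like this can drive either a manual tester or an automated keyword interpreter, which is one reason the keyword approach encourages reuse.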
Documenting defects: The primary purpose of testing is to detect defects in the application before it is released into production. Furthermore, defects are arguably the only product the testing team produces that is seen by the project team. Document defects in a manner that is useful in the defect remediation process. At a bare minimum, each defect should contain the author, name, description, severity, impacted area and status.
Defect name: The name or title should contain the essence of the defect, including the functional area and nature of the defect.
Defect description: The description should clearly state the sequence of events that leads to the defect. When possible, include a screenshot or printout of the error.
How to replicate: The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.
Defect severity: The severity assigned to a defect depends on the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production.
Impacted area: The impacted area can be referenced by functional component or functional area of the system. Often both are used.
Status report: A test organization and members of the testing team will be called upon to create status reports on a daily, weekly, monthly and project basis. The content of any status report should remain focused on the testing objective, scope and scheduled milestones currently being addressed. It is useful to state each of these at the beginning of each status report and then publish the achievements or goals accomplished during the current reporting period, as well as those that will be accomplished during the next reporting period.
Any known risks that will directly impact the testing effort need to be itemized here, especially any "showstoppers" that will prevent any further testing of one or more aspects of the system.
Reporting period: This is the period covered in the current status report. Include references to any previous status reports that should be reviewed.
Mission statement: The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment.
Current scope: The components of the system being tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope," and any related components that are not being tested need to be clearly itemized as "out of scope."
Schedule milestones: Any schedule milestones being worked on during the current reporting period need to be listed and their current status clearly stated. Milestones that were scheduled but not addressed during the current reporting period need to be raised as risks.
Risks: Risks are factors that could negatively impact the current testing effort. An itemized list of risks that are currently impacting the testing effort should be drawn up and their impact on the testing effort described.

David W. Johnson

Monday, August 25, 2008

Automating regression test cases

Q-How to identify a regression test case?

A - I presume that what you are asking is how to pick a subset of a set of existing test cases to be somehow transformed into regression tests. I have to further assume that the transformation is from a manually executed test case to an automated one. The simple answer is to pick a subset of the manual test cases that exercises what you would consider core functionality of the system and that is unlikely to change significantly over time.
Automated regression tests are not very powerful tests. All they really tell you is whether something you thought to program your automation scripts to check for has changed, basically making them automated change detectors. The biggest challenge with this is that these automated scripts tend to be quite fragile, meaning that slight changes in the application will often cause the tests to report "failures" that actually indicate the script needs to be updated to deal with the change in the application. This is problematic because it often takes more time and effort to maintain these automated regression tests than it would have taken to just execute them manually in the first place.
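To illustrate what "automated change detector" means in practice, here is a bare-bones Python sketch that compares the current output of something under test against a stored baseline. The path and the function under test are made-up placeholders.

# A bare-bones "change detector": compare the current output of whatever is under
# test against a stored baseline. The path and the function are placeholders.
import json
import pathlib
import unittest

BASELINE = pathlib.Path("baselines/price_table.json")

def function_under_test():
    """Stand-in for whatever output the regression suite checks."""
    return {"ALBUM-42": 9.99, "ALBUM-43": 12.49}

class PriceTableRegressionTest(unittest.TestCase):
    def test_output_matches_recorded_baseline(self):
        current = function_under_test()
        if not BASELINE.exists():
            # First run: record the baseline instead of failing.
            BASELINE.parent.mkdir(parents=True, exist_ok=True)
            BASELINE.write_text(json.dumps(current, indent=2))
            self.skipTest("Baseline recorded; re-run to compare against it.")
        expected = json.loads(BASELINE.read_text())
        # Any change to the output "fails" here until the baseline is updated --
        # which is exactly the maintenance cost described above.
        self.assertEqual(current, expected)

if __name__ == "__main__":
    unittest.main()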

On top of that, there is very little data to suggest that automated regression tests actually find very many defects. If your testing mission is to find as many of the existing defects as possible, investing in regression testing may not be a valuable use of your time. If, however, you have a good business reason to need to demonstrate that some very specific features are available and working (at least superficially), over and over again, on a relatively stable and mature application, investing in automated regression testing may be a wise choice.

By Scott Barber

Tuesday, August 19, 2008

Newsletter on Testing 19th July 2008 – 18th August 2008

Market Trends in Testing

India all set to rule software testing market India is all set to become a leader in the software testing market with an increasing number of software development companies…(more)

Developer special: Software testing in India Software testing is currently considered among the fastest growing industry segments in India…(more)
Featured Article

Enterprise strategies to improve application testing. Ensuring the accuracy, reliability and quality of your critical business applications has never been more important. Why? Because companies across industries depend on mission-critical enterprise applications to drive their business initiatives. In turn, these applications rely on relational databases to store and manage the underlying enterprise data…(more)

Technologies Update

Security Innovation Launches the "SI Tested" Program Security Innovation the authority in secure software lifecycle management and leading provider of secure software development …(more)

Froglogic Announces Squish for Web Testing Tool Support Version for Newly Released Firefox 3 Squish for Web is a professional functional GUI testing tool …(more)
Case Studies

Automation of Monthly Application Release Testing at a Telecom Service Provider. Leading Telecom operator in Europe. The client offers services under one of the most recognised brand names in the telecommunications services industry…(more)


Satyam Selects ChangeBASE AOK to Accelerate Application Deployment and Migration Satyam, a leading global business consulting and information technology services provider, announced today that it has partnered with ChangeBASE…(more)

Events & Conferences

Software Testing Analysis & Review Conference This conference covers…(more)

Agile Development Practices 2008 This conference covers…(more)
Research & Analytics
Market Trends in Testing

India all set to rule software testing market

Date: 9th August 2008
Source: Economic Times

NEW DELHI: India is all set to become a leader in the software testing market with an increasing number of software development companies outsourcing their software testing work here. Industry analyst firm Gartner has pegged the worldwide software testing market at $13 billion and the global market for outsourced testing services to be around $6.1 billion, of which India is expected to corner a 70% share.

Software testing implies checking any IT system prior to implementation for multiple aspects like functionality, reliability, usability, security, compliance and performance. Market players like Hexaware and AppLabs believe that the need for outsourcing software testing has grown due to the high level of complexity and multiple intersection points in modern software.

“The winning combination of cost, communication, exposure to various domains, testing principles and test tools gives a clear edge to India in software testing,” said Hexaware Technologies global delivery head and chief software architect Ramanan RV. While software services are growing at an average of about 10-12% globally, testing is growing at over 50% every year. The market opportunity for Indian offshore testing companies is seen at around $8 billion by year-end, from $2-3 billion a year ago.

“Indian businesses have matured in terms of making IT central to all business processes. Hence, there is a very high level of business dependence on error-free software code,” said AppLabs president and CEO Makarand Teje.

A global case in point is eBay, which experienced a 22-hour outage of its website in 1999 due to software flaws. It cost eBay $5 million in revenue and an 11% drop in share price. The outage affected 1.2 million customers who were either trying to sell or buy something on the website.

Along with the growth witnessed in offshoring of software testing to India, the average deal size of such projects is also on the rise. A few years ago, the average deal size for an outsourced testing project was about $50,000-60,000, requiring a few testers. That has now grown to about $2-4 million per project.

According to Gartner, India will require around 18,000 testing professionals every year over the next three years to fulfill the demand seen in the software testing market.


Developer special: Software testing in India

Overview of software testing market in India and possible career graphs

Abhigna N G
Date: 24th July 2008
Source: Ciol

BANGALORE, INDIA: Software testing is currently considered among the fastest growing industry segments in India.

This has become possible largely due to the presence of multinational companies (MNCs) and various other smaller companies, which have expanded their presence in a short span of time, thereby allowing many techies to enter the area of software testing.

"The outsourceable testing market to India is estimated to be $32 billion and there is 18,000 professionals' shortage in testing industry. We are targeting to tap this potential market," said Mohan Panguluri, vice president, operations, EDI.

As we are aware, the Indian software testing market is estimated to be around $0.8 to $1 billion, and the latest survey showed that we have a shortfall of 15,000 testing professionals. And another 25,000 software testers are required in the coming years!

This CIOL special on software testing focuses on the software testing industry in India, the types of various testing tools available in the market, the possible career graphs for software testing professionals as compared to software developers, and how opting for a career in software testing is equivalent to software development.

Industry Overview

1. Software testing likely to enter next level of maturity
Integrating, testing and QA along the entire lifecycle of a product is key to employee indulgence and engagement for long term.

2. Qualitree to address software testing needs
As India is poised to become the leader in software testing market with an increasing number of companies outsourcing software testing to India, Qualitree is geared up to meet the needs of the growing market.

3. Application testing has seen a paradigm shift
The conventional approach of testing being limited to defect tracking has now progressed, feels Mohanram NK.

4. Automation Framework to solve automation process
Manual testing has been replaced with automation to reap the benefits of repeatability, reusability and reliability, says Prabhu Annadurai.

5. Lack of test case management threatens software quality.
According to a recent survey, only 29 percent of organizations use a test case management application, while 55 percent of organizations use inefficient methods to manage their testing.

Career Guidance

1. Know more about documentation testing
Documentation testing is nothing but testing concerned with the accuracy of documentation.

2. How to get job in Software Testing?
Testing requires in-depth knowledge of the SDLC, out-of-the-box thinking, analytical skills and some programming language skills, apart from software testing basics.

Testing Tools

1. Applying model based testing in a UAT context
As the software industry matures, there is a growing need felt to bring some structure into the development phase and its correlation to how testing can be conducted. This article aims to describe how a model based testing approach can be adopted in a Retail Banking UAT context.

2. Have you tried Ranorex Studio?
GUI based software automation framework to simplify both integration and acceptance testing for developers, SQA managers, and testing professionals.

Technologies Update

Security Innovation Launches the "SI Tested" Program

Date: 12th August 2008
Source: iStockAnalyst

Security Innovation (www.securityinnovation.com), the authority in secure software lifecycle management and leading provider of secure software development and software risk solutions, today announced its "SI Tested" program (http:// tested.securityinnovation.com). This new program offers a means of validating a company's security testing efforts in the absence of either industry and/or internal company standards for software security. This validation allows companies, whose software programs undergo Security Innovation's security testing, to display the "SI Tested" logo on their Web site, marketing collateral and other appropriate software packaging. The logo demonstrates how essential software security is to them, their customers, prospects and partners.
"Security is a priority for SupportSoft and our customers. Demonstrating a continuing commitment to security is a business necessity," said Cadir Lee, CTO of SupportSoft. "We selected the 'SI Tested' logo program as a means to validate our efforts for software security. Third-party certification gives our customers confidence that our software security has undergone rigorous review."

The "SI Tested" program is currently available to all companies that wish to establish stringent security standards practices and instill customers with confidence in their products and practices. By proactively demonstrating their commitment and accountability to these security best practices through an independent third-party, organizations that bear the "SI Tested" logo differentiate themselves and gain a valuable marketing asset.

In order to display the logo, Security Innovation must evaluate a company's software. The software must be subjected to a rigorous security testing process that identifies high-severity vulnerabilities. The testing process ensures that critical vulnerabilities are found, and customers are encouraged to repair these vulnerabilities prior to release, in order to protect customers and the security of their data.

Additionally, the program can be customized to facilitate compliance to industry-published lists and guidelines from organizations such as OWASP, WASC and SANS, as well as to internal corporate standards. Companies will receive a report detailing the results of the assessment and can post the results publicly.
This hands-on methodology, developed by Security Innovation and used in these evaluations, has been adopted by universities and organizations across all industries, and offers a more in-depth approach than that taken by remote security scanning tools.

"The security industry needs common and reliable metrics through which to assess software security. As companies become increasingly aware of high-profile security threats, they insist on doing business with vendors that can prove their obligation to security," said Ed Adams, president and CEO of Security Innovation. "Businesses today should not fear security, but instead view it as a differentiator and business enabler. By offering a clear-cut means for companies to evaluate and promote their security practices, Security Innovation is positioning security as a critical component of good business."


Froglogic Announces Squish for Web Testing Tool Support Version for Newly Released Firefox 3

Date: 12th August 2008
Source: Newswire Today

Froglogic GmbH today announced support for testing web applications executed in the newly released Firefox version 3.0 with Squish for Web.

Squish for Web is a professional functional GUI testing tool to create and run automated GUI tests on Web/HTML/Ajax applications.

Squish works on different platforms such as Windows, Linux/Unix, Mac OS X and embedded Linux and supports testing web applications in Microsoft Internet Explorer, Firefox, Mozilla, Safari and Konqueror.

Support for testing web applications running in the new Firefox 3.0 has now been added and is available in the Squish 3.4.1 release.

"One of Squish for Web's main advantages is the possibility to run the same test scripts across all popular Web browsers and platforms to not only automatically testing the web application's functionality but to also automatically ensuring correct behavior across different browsers. Therefor it is important for us to quickly provide our customers with support for new web browser versions as soon as they become available", said Andreas Pakulat, Squish for Web team lead at froglogic.

Squish offers a versatile testing framework for web applications with a choice of popular test scripting languages (Python, JavaScript, Perl and Tcl) extended by test-specific functions, open interfaces, add-ons, integrations into test management systems, a powerful IDE aiding the creation and debugging of tests and a set of command line tools facilitating fully automated test runs.

Squish also offers dedicated support for testing GUI applications based on Java Swing/AWT, Java SWT/RCP, C++ Qt and some more technologies.

Tests created with Squish are cross-platform and browser independent and can be executed in any of the supported browsers without any changes.


Shunra Software Releases Powerful Testing Package Reducing Root Cause Discovery Time from Weeks to Hours

Date: 12th August 2008
Source: iStockAnalyst

Shunra Software, the world's leading provider of network emulation solutions, today announced the release of Shunra VE Application Performance Analysis Package, a software solution available through both the hardware and software WAN emulation solutions offered by the company. This new package delivers a highly integrated solution for performance testing applications in a real-world network environment, identifying business transactions with performance issues and analyzing those transactions to guide performance improvement.
Shunra's VE Application Performance Analysis Package enables rapid identification of specific business transactions which may experience performance problems when running over the production network. During performance testing, automated packet captures are recorded for the transactions of interest. Automated analysis tools then provide insight into the behavior of transactions based on the packet flows involved in the transaction. This level of integration has proven to reduce root cause discovery time from weeks to hours.

The components of this package are generally used in tandem, but may also be used independently of each other depending on the application or service being tested for the network. Keith Lyon, Technical Lead for Enterprise Test Center, the technology center for a Fortune Global 500 consumer products company says that, "The use of Shunra VE products has increased significantly in our organization over the past two years as we find more value in pre-deployment testing over the emulated WAN. We've found that the accuracy of the Shunra emulation tools is at least 95% of our real world network. This accuracy, and the speed at which we are able to get the data, is critical for success and it is something we couldn't get from any other tool."

"The Shunra VE Application Performance and Analysis Package is a critical advancement in Shunra's support of building network-aware applications throughout all stages of the application development lifecycle," said Matt Reid, Shunra VP of Worldwide Marketing. "Clients are seeing increased efficiencies as they provide network performance engineers, QA analysts and application developers within their IT group the ability to avoid costly performance bottlenecks, extensive troubleshooting efforts and potential rollout failures. This new package identifies and analyzes specific business transactions that will cause end-user response time issues and provide actual depictions of real-world network impact, prior to deployment."

"IT organizations, working with additional resource constraints in tandem with increased performance demands, benefit when new tools integrate in a seamless and non-disruptive way," said Olga Yashkova, Analyst, Test & Measurement Practice with analyst firm Frost and Sullivan. "Not only does Shunra's VE Application Performance Analysis Package transform the LAN to a WAN for real world testing into the testing environment; it also uses tools and scripts that currently exist in the lab, allowing IT professionals to rapidly implement WAN emulation with minimal resource demand."

The VE Application Performance Analysis Package may be integrated with Shunra's VE Suite Appliance solution or VE Desktop Professional.


German researchers develop “EmoGlove” to simplify software testing

Date: 11th August 2008
Source: Crunchgear

A research team at the Fraunhofer Institute for Computer Graphics Research in Germany developed a sensor glove that is supposed to help companies evaluate the quality of software applications.

Bodo Urban, the head of the 25 researchers involved in the project, says his basic goal is to improve the relation between man and machine with the so-called EmoGlove. The technology helps to track all movements of the mouse and all keystrokes on the keyboard of a PC. It’s also possible to track eye movements and a person’s facial expressions.

The EmoGlove is also able to measure one's heart rate, skin resistance and body temperature. Based on this information, the system continuously registers a subject's emotional attitude towards the software application tested. This way, software companies (e.g. game developers) can register if their product is boring, too hard to use or badly designed.


Execom Adopts New Software Testing Approaches

Date: 11th August 2008
Source: Newswire Today

Describes the software testing process within Execom and the tools and approaches its QAEs use.

Development of the software testing process within the company has gained strength during the past two years. Today, Execom has a separate testing department with the goal of constantly progressing and improving software quality.

Along with HR growth, Execom introduced changes in its organizational structure. Teams have expanded and there was a natural need to establish separate departments. One very important team of software testers was formed at the beginning of the year.

At first, software testing was part of the development process, with software developers taking on the role of testers. Tasks were accomplished with high quality, but more complex projects demanded a more serious approach. An independent testing process was begun in 2005.

Today Execom has a separate team – the testing department. Quality assurance engineers are assigned to all ongoing projects. Together with developers, they improve the quality of deliverables during the whole software lifecycle. The testing process is standardized, and documentation is performed in accordance with the IEEE 829 standard for software testing documentation.

The testing team is very flexible and keen on constant improvement. In order to remain competitive, QAEs have begun to adopt the TMap test management approach as well as the test case management tool TestLink.

With a serious approach and strong commitment they are surely headed the right way.


Washington Inventors Develop Computer-Implemented Software Testing System

Date: 9th August 2008
Source: Factiva

ALEXANDRIA, Va., Aug. 9 -- Ibrahim Khalil Ibrahim El Far of Bellevue, Wash., and Ivan Santa Maria Filho of Sammamish, Wash., have developed a software testing system.

According to the U.S. Patent & Trademark Office: "An automated data generation system and methods are provided to facilitate generation of test data sets for computerized platforms while mitigating the need to store massive quantities of potentially invalid test data. In one aspect, a computerized test system is provided. A rules component is provided to specify one or more data domains for a test data set. A data generator employs the rules component to generate the test data set, where the test data set is then employed to test one or more computerized components."


New Portal from Original Software enables better test-team collaboration

Date: 5th August 2008
Source: M2 Presswire

Original Software, the testing solution vendor, today announced the launch of Original Software Manager, a brand new test-asset management portal, providing a single point of entry to the complete Original Software product set.

Using a simple file structure within Windows Explorer, workspaces can be stored locally or put on the network to share between test teams. It allows better organisation of all the assets involved on test projects, from automation, workflow and manual testing tools, to test plans, scripts and action maps, as well as supporting documents and spreadsheets. In fact, any kind of item or application can be dragged into these workspace folders, facilitating knowledge sharing and collaboration within the test team.

Colin Armitage, CEO of Original Software said: "The new portal allows testing and QA professionals to manage licences and more easily deal with complex environments where multiple servers and testing solutions may be deployed. With a user friendly interface, it is anticipated that significant time savings will be found in locating files, launching products and utilising assets that may already exist within the team."

Original Software Manager is shipped as a free add-on with all new product purchases or can be requested as a CD by existing customers.


Applabs to Provide Platform Testing as a Service

Date: 22nd July 2008
Source: Cxotoday

Applabs, a CMMI Level 5 certified product and software testing company, has plans to test the waters in the platform-testing-as-a-service model and is developing model platforms across verticals for the same. The platforms will be essentially useful for those companies that want to outsource quality assurance.

Explaining the concept, Ravi Gurrapadi, head of delivery for the stock exchange and bond market groups at AppLabs said, "This will require us to work with third party vendors. We can focus on bug fixing at an advanced stage, since the protocol being used will already be fixed. We can certify the applications, the software, etc, and by doing so also cover all gateways, and take care of service level agreements (SLAs)."

In the immediate future, Applabs wants to increase its market share in India by 25%. From current revenues of $80 million, the company has set a target of $100 million by the end of 2008.

Gurrapadi said, "We have developed testing models for stock exchanges. We understand their needs. We have also developed similar models for industries in the telecom, BFSI, retail, off shoring and e-learning sectors. I see a lot of scope in the off shoring segment, which is currently pegged at $ 8 billion in India. We will tap the gaming segment in the near future."

"We are ready for smart tests to stay ahead of the game. Our model based approach also gives us the bandwidth to stay ahead of the competition, and gives us a faster time to market our solutions."

Applabs has done software testing for big stock exchanges globally and also has clients among alternative exchanges.

It studies the inherent complexities of each industry, noting similar bugs across applications and software, as well as bugs encountered at other levels. "Since we have a basic model in place, we are now focusing on plugging alternate bugs. We are looking at closing deals with large enterprises, which are more at risk," said Gurrapadi.

Stating that software testing at the deployment stage is the need of the hour, Gurrapadi said most large organizations are not yet ready for it. "Less than five per cent of the large organizations are prepared. We are educating them. We tell them that they will have to spend 30% of their IT costs for quality assurance when the system catches the bug, and that it is better to pay the same money before deployment, and start work on a robust system."

Top

Satyam Selects ChangeBASE AOK to Accelerate Application Deployment and Migration

Date: 4th August 2008
Source: PR Newswire

HYDERABAD, India, Aug. 4 /PRNewswire-FirstCall/ -- Satyam Computer Services Ltd., a leading global business consulting and information technology services provider, announced today that it has partnered with ChangeBASE, the London-based maker of the AOK suite of compatibility products. The agreement calls for Satyam to integrate ChangeBASE AOK within its existing services, to provide customers with an accelerated Application Compatibility Testing, Remediation and Packaging solution.

The world-class software, coupled with Satyam's extensive expertise, will automate many of the tasks required to make applications work seamlessly on a chosen platform. The result is a more rapid process for reporting and repairing application compatibility issues across a range of platforms, including Microsoft Windows XP and Vista, as well as a broad range of virtualization environments. It is especially useful in the early phases of a project, because it assesses migration program requirements very quickly and accurately.

"By leveraging ChangeBASE AOK, we can provide a detailed report on the status of a complete application suite before a migration project," said Nick Sharma, the global head of Satyam's Infrastructure Management Practice. "When incorporated into our service suite, the ChangeBASE tools remove many obstacles associated with application deployment, which is a considerable challenge for many organizations. They also solve complex business and technology problems very rapidly, so companies can focus on strategic initiatives."

AOK has enabled Satyam to reduce the time required to package applications for a new environment by 70 percent. Additionally, Satyam is launching several new services that incorporate AOK. These include an application portfolio assessment that produces results in days rather than months and determines applications' readiness to run on new platforms, such as Vista or virtualized environments. Several Satyam clients have reduced the time of such programs by 90 percent. At the same time, service quality and consistency have improved.

Satyam is the first Microsoft Application Compatibility Factory partner to deploy AOK. At first, it will deploy the tool in its packaging factory, and then in on-site operations, enabling clients to benefit from infrastructure management services more quickly and at lower cost.

"Satyam is revolutionizing several traditionally manual processes for application compatibility testing and packaging, and creating a new benchmark in application estate management services," Sharma added. "These efforts are resulting in practical, real-world advantages for customers."

"Several organizations have benchmarked our software, and the services Satyam has built around it, and found that it provides significant cost, accuracy, consistency and quality benefits," said ChangeBASE Managing Director John Tate. "We look forward to helping Satyam's ever-growing list of global customers increase their efficiency and serve customers better by managing their applications and taking advantage of optimal OS and virtual platforms with ease."

Top

ConFy, or Conformance for You, is an advanced software tester solution developed to meet the needs identified by ISMI

Date: 24th July 2008
Source: Ciol

HYDERABAD, INDIA: Satyam Computer Services Ltd has collaborated with International SEMATECH Manufacturing Initiative (ISMI), a global alliance of the world's major semiconductor manufacturers dedicated to improving productivity and cost performance of equipment and manufacturing operations, to develop ConFy, a next-generation Advanced Software Tester solution that will validate the automation capabilities of advanced semiconductor factory equipment.

The new test solution, designed to ISMI requirements, will emulate semiconductor manufacturing environments, and enable equipment makers to check the conformance of their tools to current and emerging SEMI standards in equipment connectivity and fab manufacturing automation.

ConFy provides highly automated testing of SECS/GEM 300 SEMI standards implementations with minimal user intervention and manual interpretation of results. ConFy also features a custom test plan scripter for testing in an actual production environment and secure test entities, and supports multiple load ports, concurrent testing and the creation of custom reports.

Satyam's existing customer base of semiconductor equipment manufacturers and chip manufacturers, and the industry as a whole, will benefit from this industry-leading solution. Advanced testing of standards ensures quicker integration of equipment into chip manufacturing facilities.

The collaboration, in support of these critical industry requirements, provides a solid technical foundation for standards conformance testing.

Top

Featured Article

Enterprise strategies to improve application testing

Ensuring the accuracy, reliability and quality of your critical business applications has never been more important. Why? Because companies across industries depend on mission-critical enterprise applications to drive their business initiatives. In turn, these applications rely on relational databases to store and manage the underlying enterprise data. The ability to enhance, maintain, customize and upgrade these sophisticated applications is critical for achieving long-term business goals. Companies are striving to speed the deployment of reliable, high-quality applications, while staying within tight IT budgets.

Now, more than ever, companies face new challenges when designing effective and efficient testing strategies for enterprise applications. Incomplete or flawed test data means inaccurate testing, which can lead to application failure and business disruption. The most common approaches to building test environments include cloning application production environments and writing custom extract programs. However, these methods can be labor intensive, error prone and costly. No company wants to risk losing customers, market share, brand equity or revenue by delivering applications that have not been thoroughly tested. For this reason, end-to-end application testing is a strategic priority throughout the application development lifecycle.
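To make the "custom extract program" approach concrete, here is a minimal, hypothetical JDBC sketch of copying a referentially consistent subset of data into a test schema and masking sensitive columns; the database, schema, table and column names and the connection details are invented for the example and are not taken from the white paper. Test data management tools aim to automate and harden exactly this kind of work.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only; assumes a PostgreSQL-style database with matching
// "prod" and "test" schemas and the JDBC driver on the classpath.
public class TestDataSubset {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/appdb", "user", "password");
             Statement st = con.createStatement()) {

            // 1. Take a small slice of parent rows instead of cloning everything.
            st.executeUpdate(
                "INSERT INTO test.customers " +
                "SELECT * FROM prod.customers WHERE region = 'EU' LIMIT 100");

            // 2. Copy only the child rows that reference the selected parents,
            //    so foreign keys in the test schema remain valid.
            st.executeUpdate(
                "INSERT INTO test.orders " +
                "SELECT o.* FROM prod.orders o " +
                "JOIN test.customers c ON o.customer_id = c.id");

            // 3. Mask sensitive columns before handing the data to testers.
            st.executeUpdate(
                "UPDATE test.customers SET email = 'masked@example.com'");
        }
    }
}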

So, how can IT organizations improve testing efficiencies and reduce the total cost of owning and maintaining enterprise applications? This white paper examines how proven test data management capabilities can help you deliver reliable applications and achieve maximum business value from your application investment.

Top

Case Studies:

Case Study: Caritor

Automation of Monthly Application Release Testing at a Telecom Service Provider

Client Overview

The client is a leading telecom operator in Europe, offering services under one of the most recognised brand names in the telecommunications services industry.

Business Context

The client has a wide range of applications, addressing different functional areas such as Customer Management, Rating and Billing, Operational Reporting, Credit Risk Analysis, and Fraud Detection. There are continuous changes to these live applications, either due to system fixes or due to new projects impacting the existing applications.

Release testing comprises testing the various applications on a monthly basis to ensure that both the system fixes and the new projects work as intended, and that the unchanged functionality of the existing applications is not impacted by the changes.

In order to ensure that the unchanged functionality is not affected, regression test packs were developed for each of the applications and the tests were conducted manually.

Challenge

Test cases were written each month to test changes to the existing applications, and these test cases then became part of the regression pack. Over time the regression pack accumulated so many test cases that executing all of them manually within the short release time frames became virtually impossible.

The onsite team sent across the manual regression test cases, which were used as the basis for automation. Appropriate coding standards were followed while coding the automation scripts, and the scripts were reviewed by both offshore and onsite team members. The scripts thus developed were tested and sent back onsite for static testing and validation. Caritor's test team then maintained the automated regression test scripts on an ongoing basis.

Some notable challenges faced during the automation project were:
· Since many of the applications were deployed on UNIX servers, they had to be tested from Windows NT client machines
· One of the applications required frequent switching between UNIX back-end applications and a Java front-end. Caritor used a semi-keyword-driven approach to meet this requirement, wherein the functions to be executed were specified in the data tables (a minimal sketch of this table-driven idea follows below).

Automated regression test scripts have been created and are being maintained by Caritor’s test team on an ongoing basis.
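As a rough, hypothetical sketch of the semi-keyword idea described above (not Caritor's actual WinRunner/TSL scripts), the Java snippet below shows a data table whose rows name a function plus its arguments, and a small driver that dispatches each row to the matching action; the keywords and steps are invented for illustration.

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative keyword/table-driven test driver, not the project's real scripts.
public class KeywordDriver {

    // Map of keyword -> action. Each action receives the row's parameters.
    private final Map<String, Consumer<List<String>>> actions = Map.of(
            "login",  args -> System.out.println("login as " + args.get(0)),
            "query",  args -> System.out.println("run query: " + args.get(0)),
            "verify", args -> System.out.println("verify value: " + args.get(0))
    );

    // Walk the data table: first column is the keyword, the rest are parameters.
    public void run(List<List<String>> table) {
        for (List<String> row : table) {
            String keyword = row.get(0);
            List<String> params = row.subList(1, row.size());
            actions.getOrDefault(keyword,
                    p -> System.out.println("unknown keyword: " + keyword))
                   .accept(params);
        }
    }

    public static void main(String[] args) {
        // The "data table" a tester maintains instead of raw automation code.
        new KeywordDriver().run(List.of(
                List.of("login", "testuser"),
                List.of("query", "unbilled_usage_report"),
                List.of("verify", "42")
        ));
    }
}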

How Caritor Helped

Caritor proposed a solution that involved automating the execution of the monthly release testing. The reasons for choosing an automation approach included:

· Reduced execution time
· Reduced number of resources needed (over a period of time)
· Ability to execute the automated scripts in various environments
· Repetition of the tests for the unchanged functionality without errors
· Easy identification of GUI changes
· Unified Report for the test

Caritor leveraged an onsite-offshore model for this initiative, with 75% of the test team located offshore. The tools used were Mercury TestDirector and WinRunner. The test requirements were specified in TestDirector, and Caritor's test team worked with TestDirector and WinRunner to obtain and automate the test requirements.

As part of the “automation of monthly release testing”, Caritor’s test team took on the responsibility to create, maintain and run the automated test scripts in tandem with the manual test scripts.

Client Benefits

The client derived several benefits as a result of this initiative:

· Reduced Time to Market – Automation reduced the time taken to complete the monthly release testing, helping bring new features and services to market faster
· Increased Effectiveness of Testing – A reduction in execution errors led to reduced defect slippage into production
· Access to technically skilled offshore resources enabled the client to derive significant cost benefits
· Automation complements the testing performed as part of the manual test suite

Top

Events & Conferences:

Software Testing Analysis & Review Conference

Date: 29th September to 3rd October 2008
Place: Anaheim, CA • Disneyland® Hotel

The Top Ten Reasons to Attend STARWEST

1. Over 100 learning sessions: tutorials, keynotes, conference sessions, bonus sessions, and more
2. In-depth tutorials, half- and full-day options—double the number of classes from last year
3. Cutting-edge testing answers from top testing experts
4. Presentations from highly experienced testing professionals
5. Networking opportunities with your peers in the industry
6. Special events—welcome reception, bookstore, meet the speakers, and more
7. The largest testing EXPO anywhere
8. Group discounts—bring your whole team
9. The perfect balance of learning and fun in Southern California
10. All this at the happiest place on Earth—Disneyland® Hotel!