Monday, August 25, 2008
Automating regression test cases
A - I presume that what you are asking is how to pick a subset of a set of existing test cases to be somehow transformed into regression tests. I have to further assume that the transformation is from a manually executed test case to an automated one. The simple answer is to pick a subset of the manual test cases that exercise what you would consider core functionality of the system, functionality that is unlikely to change significantly over time.
Automated regression tests are not very powerful tests. All they really tell you is whether something you thought to program your automation scripts to check for has changed, basically making them an automated change detector. The biggest challenge with this is that these automated scripts tend to be quite fragile, meaning that slight changes in the application will often cause the tests to report "failures" that actually indicate that the script needs to be updated to deal with the change in the application. This is problematic because it often takes more time and effort to maintain these automated regression tests than it would have taken to just execute them manually in the first place.
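For example (a minimal illustrative sketch, not part of the original answer; the confirmation text and helper function are hypothetical), an automated regression check often amounts to little more than this:

# Minimal sketch of an automated regression check acting as a "change detector".
# fetch_confirmation_text() is a hypothetical stand-in for whatever layer drives the UI.
import unittest

def fetch_confirmation_text(order_id):
    # Stub so the sketch runs; a real script would drive the application here.
    return "Your order has been placed."

class CheckoutRegressionTest(unittest.TestCase):
    def test_order_confirmation_message(self):
        text = fetch_confirmation_text(order_id=1001)
        # The check only detects change against what it was told to expect;
        # even a harmless rewording of this message is reported as a "failure".
        self.assertEqual(text, "Your order has been placed.")

if __name__ == "__main__":
    unittest.main()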
On top of that, there is very little data to suggest that automated regression tests actually find very many defects. If your testing mission is to find as many of the existing defects as possible, investing in regression testing may not be a valuable use of your time. If, however, you have a good business reason to demonstrate that some very specific features are available and working (at least superficially), over and over again, on a relatively stable and mature application, investing in automated regression testing may be a wise choice.
By Scott Barber
Tuesday, August 19, 2008
Newsletter on Testing, 19th July 2008 – 18th August 2008
India all set to rule software testing market India is all set to become a leader in the software testing market with an increasing number of software development companies…(more)
Developer special: Software testing in India Software testing is currently considered among the fastest growing industry segments in India…(more)
Featured Article
Enterprise strategies to improve application testing. Ensuring the accuracy, reliability and quality of your critical business applications has never been more important. Why? Because companies across industries depend on mission-critical enterprise applications to drive their business initiatives. In turn, these applications rely on relational databases to store and manage the underlying enterprise data…(more)
Technologies Update
Security Innovation Launches the "SI Tested" Program Security Innovation, the authority in secure software lifecycle management and leading provider of secure software development…(more)
Froglogic Announces Squish for Web Testing Tool Support Version for Newly Released Firefox 3 Squish for Web is a professional functional GUI testing tool …(more)
Case Studies
Automation of Monthly Application Release Testing at a Telecom Service Provider. Leading Telecom operator in Europe. The client offers services under one of the most recognised brand names in the telecommunications services industry…(more)
Satyam Selects ChangeBASE AOK to Accelerate Application Deployment and Migration. Satyam Computer Services Ltd., a leading global business consulting and information technology services provider, announced today that it has partnered with ChangeBASE…(more)
Events & Conferences
Software Testing Analysis & Review Conference This conference covers…(more)
Agile Development Practices 2008 This conference covers…(more)
Research & Analytics
Market Trends in Testing
India all set to rule software testing market
Date: 9th August 2008
Source: Economic Times
NEW DELHI: India is all set to become a leader in the software testing market with an increasing number of software development companies outsourcing their software testing work here. Industry analyst firm Gartner has pegged the worldwide software testing market at $13 billion and the global market for outsourced testing services to be around $6.1 billion, of which India is expected to corner a 70% share.
Software testing implies checking any IT system prior to implementation for multiple aspects like functionality, reliability, usability, security, compliance and performance. Market players like Hexaware and AppLabs believe that the need for outsourcing software testing has grown due to the high level of complexity and multiple intersection points in modern software.
“The winning combination of cost, communication, exposure to various domains, testing principles and test tools gives a clear edge to India in software testing,” said Hexaware Technologies global delivery head and chief software architect Ramanan RV. While software services are growing at an average of about 10-12% globally, testing is growing at over 50% every year. The market opportunity for Indian offshore testing companies is seen at around $8 billion by year-end, from $2-3 billion a year ago.
“Indian businesses have matured in terms of making IT central to all business processes. Hence, there is a very high level of business dependence on error-free software code,” said AppLabs president and CEO Makarand Teje.
A global case in point is eBay, which experienced a 22-hour outage of its website in 1999 due to software flaws. It cost eBay $5 million in revenue and an 11% drop in share price. The outage affected 1.2 million customers who were either trying to sell or buy something on the website.
Along with the growth witnessed in offshoring of software testing to India, the average deal size of such projects is also on the rise. A few years ago, the average deal size for an outsourced testing project was about $50,000-60,000, requiring a few testers. That has now grown to about $2-4 million per project.
According to Gartner, India will require around 18,000 testing professionals every year over the next three years to fulfill the demand seen in the software testing market.
Developer special: Software testing in India
Overview of software testing market in India and possible career graphs
Abhigna N G
Date: 24th July 2008
Source: Ciol
BANGALORE, INDIA: Software testing is currently considered among the fastest growing industry segments in India.
This has become possible largely due to the presence of multinational companies (MNCs) and various other smaller companies, which have expanded their presence in a short span of time, thereby allowing many techies to enter the area of software testing.
"The outsourceable testing market to India is estimated to be $32 billion and there is 18,000 professionals' shortage in testing industry. We are targeting to tap this potential market," said Mohan Panguluri, vice president, operations, EDI.
As we are aware, the Indian software testing market is estimated to be around $0.8 to $1 billion, and the latest survey shows a shortfall of 15,000 testing professionals. Another 25,000 software testers will be required in the coming years!
This CIOL special on software testing focuses on the software testing industry in India, the types of various testing tools available in the market, the possible career graphs for software testing professionals as compared to software developers, and how opting for a career in software testing is equivalent to software development.
Industry Overview
1. Software testing likely to enter next level of maturity
Integrating testing and QA along the entire lifecycle of a product is key to employee involvement and engagement for the long term.
2. Qualitree to address software testing needs
As India is poised to become the leader in software testing market with an increasing number of companies outsourcing software testing to India, Qualitree is geared up to meet the needs of the growing market.
3. Application testing has seen a paradigm shift
The conventional approach to testing, once limited to defect tracking, has now progressed, feels Mohanram NK.
4. Automation Framework to solve automation process
Manual testing has been replaced with automation to reap the benefits of repeatability, reusability and reliability, says Prabhu Annadurai.
5. Lack of test case management threatens software quality.
According to a recent survey, only 29 percent of organizations use a test case management application, leaving 55 percent of organizations to manage their testing with inefficient methods.
Career Guidance
1. Know more about documentation testing
Documentation testing is nothing but testing concerned with the accuracy of documentation.
2. How to get job in Software Testing?
Testing requires in-depth knowledge of the SDLC, out-of-the-box thinking, analytical skills and some programming language skills, apart from software testing basics.
Testing Tools
1. Applying model based testing in a UAT context
As the software industry matures, there is a growing need felt to bring some structure into the development phase and its correlation to how testing can be conducted. This article aims to describe how a model based testing approach can be adopted in a Retail Banking UAT context.
2. Have you tried Ranorex Studio?
GUI based software automation framework to simplify both integration and acceptance testing for developers, SQA managers, and testing professionals.
Technologies Update
Security Innovation Launches the "SI Tested" Program
Date: 12th August 2008
Source: iStockAnalyst
Security Innovation (www.securityinnovation.com), the authority in secure software lifecycle management and leading provider of secure software development and software risk solutions, today announced its "SI Tested" program (http://tested.securityinnovation.com). This new program offers a means of validating a company's security testing efforts in the absence of industry or internal company standards for software security. This validation allows companies whose software programs undergo Security Innovation's security testing to display the "SI Tested" logo on their Web site, marketing collateral and other appropriate software packaging. The logo demonstrates how essential software security is to them, their customers, prospects and partners.
"Security is a priority for SupportSoft and our customers. Demonstrating a continuing commitment to security is a business necessity," said Cadir Lee, CTO of SupportSoft. "We selected the 'SI Tested' logo program as a means to validate our efforts for software security. Third-party certification gives our customers confidence that our software security has undergone rigorous review."
The "SI Tested" program is currently available to all companies that wish to establish stringent security standards practices and instill customers with confidence in their products and practices. By proactively demonstrating their commitment and accountability to these security best practices through an independent third-party, organizations that bear the "SI Tested" logo differentiate themselves and gain a valuable marketing asset.
In order to display the logo, Security Innovation must evaluate a company's software. The software must be subjected to a rigorous security testing process that identifies high-severity vulnerabilities. The testing process ensures that critical vulnerabilities are found, and customers are encouraged to repair these vulnerabilities prior to release, in order to protect customers and the security of their data.
Additionally, the program can be customized to facilitate compliance to industry-published lists and guidelines from organizations such as OWASP, WASC and SANS, as well as to internal corporate standards. Companies will receive a report detailing the results of the assessment and can post the results publicly.
This hands-on methodology, developed by Security Innovation and used in these evaluations, has been adopted by universities and organizations across all industries, and offers a more in-depth approach than that taken by remote security scanning tools.
"The security industry needs common and reliable metrics through which to assess software security. As companies become increasingly aware of high-profile security threats, they insist on doing business with vendors that can prove their obligation to security," said Ed Adams, president and CEO of Security Innovation. "Businesses today should not fear security, but instead view it as a differentiator and business enabler. By offering a clear-cut means for companies to evaluate and promote their security practices, Security Innovation is positioning security as a critical component of good business."
Froglogic Announces Squish for Web Testing Tool Support Version for Newly Released Firefox 3
Date: 12th August 2008
Source: Newswire Today
Froglogic GmbH today announced support for testing web applications executed in the newly released Firefox version 3.0 with Squish for Web.
Squish for Web is a professional functional GUI testing tool to create and run automated GUI tests on Web/HTML/Ajax applications.
Squish works on different platforms such as Windows, Linux/Unix, Mac OS X and embedded Linux and supports testing web applications in Microsoft Internet Explorer, Firefox, Mozilla, Safari and Konqueror.
Support for testing web applications running in the new Firefox 3.0 has now been added and is available in the Squish 3.4.1 release.
"One of Squish for Web's main advantages is the possibility to run the same test scripts across all popular Web browsers and platforms to not only automatically testing the web application's functionality but to also automatically ensuring correct behavior across different browsers. Therefor it is important for us to quickly provide our customers with support for new web browser versions as soon as they become available", said Andreas Pakulat, Squish for Web team lead at froglogic.
Squish offers a versatile testing framework for web applications with a choice of popular test scripting languages (Python, JavaScript, Perl and Tcl) extended by test-specific functions, open interfaces, add-ons, integrations into test management systems, a powerful IDE aiding the creation and debugging of tests and a set of command line tools facilitating fully automated test runs.
Squish also offers dedicated support for testing GUI applications based on Java Swing/AWT, Java SWT/RCP, C++ Qt and some more technologies.
Tests created with Squish are cross-platform and browser independent and can be executed in any of the supported browsers without any changes.
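To illustrate the cross-browser idea in general terms (this is a generic Python sketch of the concept, not the actual Squish API; the page, helper functions and browser list are hypothetical):

# One test body, parameterized over several browsers.
BROWSERS = ["ie7", "firefox2", "firefox3", "safari", "konqueror"]

def open_page(browser_name, url):
    # Stub so the sketch runs; a real tool would launch the browser and load the page.
    return {"browser": browser_name, "url": url, "title": "Login"}

def check_login_page(browser_name):
    page = open_page(browser_name, "http://example.com/login")
    assert page["title"] == "Login", "unexpected title in " + browser_name

if __name__ == "__main__":
    for name in BROWSERS:
        check_login_page(name)
        print(name + ": OK")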
Shunra Software Releases Powerful Testing Package Reducing Root Cause Discovery Time from Weeks to Hours
Date: 12th August 2008
Source: iStockAnalyst
Shunra Software, the world's leading provider of network emulation solutions, today announced the release of Shunra VE Application Performance Analysis Package, a software solution available through both the hardware and software WAN emulation solutions offered by the company. This new package delivers a highly integrated solution for performance testing applications in a real-world network environment, identifying business transactions with performance issues and analyzing those transactions to guide performance improvement.
Shunra's VE Application Performance Analysis Package enables rapid identification of specific business transactions which may experience performance problems when running over the production network. During performance testing, automated packet captures are recorded for the transactions of interest. Automated analysis tools then provide insight into the behavior of transactions based on the packet flows involved in the transaction. This level of integration has proven to reduce root cause discovery time from weeks to hours.
The components of this package are generally used in tandem, but may also be used independently of each other depending on the application or service being tested for the network. Keith Lyon, Technical Lead for Enterprise Test Center, the technology center for a Fortune Global 500 consumer products company says that, "The use of Shunra VE products has increased significantly in our organization over the past two years as we find more value in pre-deployment testing over the emulated WAN. We've found that the accuracy of the Shunra emulation tools is at least 95% of our real world network. This accuracy, and the speed at which we are able to get the data, is critical for success and it is something we couldn't get from any other tool."
"The Shunra VE Application Performance and Analysis Package is a critical advancement in Shunra's support of building network-aware applications throughout all stages of the application development lifecycle," said Matt Reid, Shunra VP of Worldwide Marketing. "Clients are seeing increased efficiencies as they provide network performance engineers, QA analysts and application developers within their IT group the ability to avoid costly performance bottlenecks, extensive troubleshooting efforts and potential rollout failures. This new package identifies and analyzes specific business transactions that will cause end-user response time issues and provide actual depictions of real-world network impact, prior to deployment."
"IT organizations, working with additional resource constraints in tandem with increased performance demands, benefit when new tools integrate in a seamless and non-disruptive way," said Olga Yashkova, Analyst, Test & Measurement Practice with analyst firm Frost and Sullivan. "Not only does Shunra's VE Application Performance Analysis Package transform the LAN to a WAN for real world testing into the testing environment; it also uses tools and scripts that currently exist in the lab, allowing IT professionals to rapidly implement WAN emulation with minimal resource demand."
The VE Application Performance Analysis Package may be integrated with Shunra's VE Suite Appliance solution or VE Desktop Professional.
German researchers develop “EmoGlove” to simplify software testing
Date: 11th August 2008
Source: Crunchgear
A research team at the Fraunhofer Institute for Computer Graphics Research in Germany developed a sensor glove that is supposed to help companies evaluate the quality of software applications.
Bodo Urban, the head of the 25 researchers involved in the project, says his basic goal is to improve the relation between man and machine with the so-called EmoGlove. The technology helps to track all movements of the mouse and all keystrokes on the keyboard of a PC. It’s also possible to track eye movements and a person’s facial expressions.
The EmoGlove is also able to measure one's heart rate, skin resistance and body temperature. Based on this information, the system continuously registers a subject's emotional attitude towards the software application being tested. This way, software companies (e.g. game developers) can register if their product is boring, too hard to use or badly designed.
Execom Adopts New Software Testing Approaches
Date: 11th August 2008
Source: Newswire Today
Describes the software testing process within Execom, the tools used, and the approaches QAEs follow.
Development of the software testing process within the company has gained strength during the past two years. Today, Execom has a separate testing department with the goal of constantly progressing and improving software quality.
Along with HR growth, Execom introduced changes to the company's organizational structure. Teams expanded, and there was a natural need to establish separate departments. One very important team, the software testers, was formed at the beginning of the year.
At first, software testing was part of the development process, with software developers taking on the role of testers. Tasks were accomplished with high quality, but more complex projects demanded a more serious approach. An independent testing process was begun in 2005.
Today Execom has a separate team, the testing department. Quality assurance engineers are assigned to all ongoing projects. Together with developers, they improve the quality of deliverables during the whole software lifecycle. The testing process is standardized, and documentation is produced in accordance with the IEEE 829 standard for software test documentation.
The testing team is very flexible and keen on constant improvement. In order to remain competitive, QAEs have begun to adopt the TMap test management approach as well as the TestLink test case management tool.
With a serious approach and strong commitment they are surely headed the right way.
Washington Inventors Develop Computer-Implemented Software Testing System
Date: 9th August 2008
Source: Factiva
ALEXANDRIA, Va., Aug. 9 -- Ibrahim Khalil Ibrahim El Far of Bellevue, Wash., and Ivan Santa Maria Filho of Sammamish, Wash., have developed a software testing system.
According to the U.S. Patent & Trademark Office: "An automated data generation system and methods are provided to facilitate generation of test data sets for computerized platforms while mitigating the need to store massive quantities of potentially invalid test data. In one aspect, a computerized test system is provided. A rules component is provided to specify one or more data domains for a test data set. A data generator employs the rules component to generate the test data set, where the test data set is then employed to test one or more computerized components."
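As a rough, hypothetical sketch of the idea described in the abstract (not the patented system itself), rules can define data domains and a generator can produce a test data set on demand instead of storing large static data files:

# Rules specify a data domain per field; the generator applies them per row.
import random
import string

RULES = {
    "customer_id": lambda: random.randint(1, 10_000),
    "state": lambda: random.choice(["WA", "VA", "CA", "NY"]),
    "zip_code": lambda: "".join(random.choices(string.digits, k=5)),
}

def generate_test_data(rules, rows):
    """Build a test data set on demand from the domain rules."""
    return [{field: make() for field, make in rules.items()} for _ in range(rows)]

if __name__ == "__main__":
    for record in generate_test_data(RULES, rows=3):
        print(record)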
New Portal from Original Software enables better test-team collaboration
Date: 5th August 2008
Source: M2 Presswire
Original Software, the testing solution vendor, today announced the launch of Original Software Manager, a brand new test-asset management portal, providing a single point of entry to the complete Original Software product set.
Using a simple file structure within Windows Explorer, workspaces can be stored locally or put on the network to share between test teams. It allows better organisation of all the assets involved in test projects, from automation, workflow and manual testing tools, to test plans, scripts and action maps, as well as supporting documents and spreadsheets. In fact, any kind of item or application can be dragged into these workspace folders, facilitating knowledge sharing and collaboration within the test team.
Colin Armitage, CEO of Original Software said: "The new portal allows testing and QA professionals to manage licences and more easily deal with complex environments where multiple servers and testing solutions may be deployed. With a user friendly interface, it is anticipated that significant time savings will be found in locating files, launching products and utilising assets that may already exist within the team."
Original Software Manager is shipped as a free add-on with all new product purchases or can be requested as a CD by existing customers.
Applabs to Provide Platform Testing as a Service
Date: 22nd July 2008
Source: Cxotoday
Applabs, a CMMI Level 5 certified product and software testing company, plans to test the waters in the platform-testing-as-a-service model and is developing model platforms across verticals for the same. The platforms will be essentially useful for those companies who want to outsource quality assurance.
Explaining the concept, Ravi Gurrapadi, head of delivery for the stock exchange and bond market groups at AppLabs said, "This will require us to work with third party vendors. We can focus on bug fixing at an advanced stage, since the protocol being used will already be fixed. We can certify the applications, the software, etc, and by doing so also cover all gateways, and take care of service level agreements (SLAs)."
In the immediate future, Applabs wants to increase its market share in India by 25%. From current revenues of $80 million, the company has set a target of $100 million by the end of 2008.
Gurrapadi said, "We have developed testing models for stock exchanges. We understand their needs. We have also developed similar models for industries in the telecom, BFSI, retail, off shoring and e-learning sectors. I see a lot of scope in the off shoring segment, which is currently pegged at $ 8 billion in India. We will tap the gaming segment in the near future."
"We are ready for smart tests to stay ahead of the game. Our model based approach also gives us the bandwidth to stay ahead of the competition, and gives us a faster time to market our solutions."
Applabs has done software testing for big stock exchanges globally and also has clients in alternate exchanges.
It studies inherent complexities among industries, notes similar bugs in applications or on software, and also bugs encountered at an alternate level. "Since we have a basic model in place, we are now focusing on plugging alternate bugs. We are looking at closing deals with large enterprises, which are more at risk," said Gurrapadi.
Stating that software testing at the deployment stage is the need of the hour, Gurrapadi said most large organizations are not yet ready for it. "Less than five per cent of the large organizations are prepared. We are educating them. We tell them that they will have to spend 30 % of their IT costs for quality assurance when the system catches the bug, and that it is better to pay the same money before deployment, and start work on a robust system."
Satyam Selects ChangeBASE AOK to Accelerate Application Deployment and Migration
Date: 4th August 2008
Source: PR Newswire
HYDERABAD, India, Aug. 4 /PRNewswire-FirstCall/ -- Satyam Computer Services Ltd., a leading global business consulting and information technology services provider, announced today that it has partnered with ChangeBASE, the London-based maker of the AOK suite of compatibility products. The agreement calls for Satyam to integrate ChangeBASE AOK within its existing services, to provide customers with an accelerated Application Compatibility Testing, Remediation and Packaging solution.
The world-class software, coupled with Satyam's extensive expertise, will automate many of the tasks required to make applications work seamlessly on a chosen platform. The result is a more rapid process for application compatibility reporting and repair issues across a range of platforms, including Microsoft XP and VISTA, as well as a broad range of virtualization environments. It is especially useful in the early phases of a project, because it assesses migration program requirements very quickly and accurately.
"By leveraging ChangeBASE AOK, we can provide a detailed report on the status of a complete application suite before a migration project," said Nick Sharma, the global head of Satyam's Infrastructure Management Practice. "When incorporated into our service suite, the ChangeBASE tools remove many obstacles associated with application deployment, which is a considerable challenge for many organizations. They also solve complex business and technology problems very rapidly, so companies can focus on strategic initiatives."
AOK has enabled Satyam to reduce the time required to package applications for a new environment by 70 percent. Additionally, Satyam is launching several new services that incorporate AOK. These include an application portfolio assessment that produces results in days, rather than months. It also determines applications' readiness to run on new platforms, such as VISTA or Virtualization. Several Satyam clients have reduced the time of such programs by 90 percent. At the same time, service quality and consistency have improved.
Satyam is the first Microsoft Applications Compatibility Factory partner to deploy AOK. At first, it will deploy the tool in its packaging factory, and then on-site operations, enabling clients to benefit from infrastructure management services more quickly and at less cost.
"Satyam is revolutionizing several traditionally manual processes for application compatibility testing and packaging, and creating a new benchmark in application estate management services," Sharma added. "These efforts are resulting in practical, real-world advantages for customers."
"Several organizations have benchmarked our software, and the services Satyam has built around it, and found that it provides significant cost, accuracy, consistency and quality benefits," said ChangeBASE Managing Director John Tate. "We look forward to helping Satyam's ever-growing list of global customers increase their efficiency and serve customers better by managing their applications and taking advantage of optimal OS and virtual platforms with ease."
ConFy, or Conformance for You, is an advanced software tester solution developed to meet the needs identified by ISMI
Date: 24th July 2008
Source: Ciol
HYDERABAD, INDIA: Satyam Computer Services Ltd has collaborated with International SEMATECH Manufacturing Initiative (ISMI), a global alliance of the world's major semiconductor manufacturers dedicated to improving productivity and cost performance of equipment and manufacturing operations, to develop ConFy, a next-generation Advanced Software Tester solution that will validate the automation capabilities of advanced semiconductor factory equipment.
The new test solution, designed to ISMI requirements, will emulate semiconductor manufacturing environments, and enable equipment makers to check the conformance of their tools to current and emerging SEMI standards in equipment connectivity and fab manufacturing automation.
ConFy provides highly automated testing of SECS/GEM 300 SEMI standards implementation with minimal user intervention and manual interpretation of results. ConFy also features a custom test plan scripter for testing the actual production environment and secure test entities, and supports multiple load ports, concurrent testing and the creation of custom reports.
Satyam's existing customer base of semicon equipment manufacturers and chip manufacturers, and the industry as a whole, will benefit from this industry-leading solution. Advanced testing of standards ensures quicker integration of equipment in chip manufacturing facilities.
The collaboration, in support of these critical industry requirements, provides a solid technical foundation for standards conformance testing.
Featured Article
Enterprise strategies to improve application testing
Ensuring the accuracy, reliability and quality of your critical business applications has never been more important. Why? Because companies across industries depend on mission-critical enterprise applications to drive their business initiatives. In turn, these applications rely on relational databases to store and manage the underlying enterprise data. The ability to enhance, maintain, customize and upgrade these sophisticated applications is critical for achieving long-term business goals. Companies are striving to speed the deployment of reliable, high-quality applications, while staying within tight IT budgets.
Now, more than ever, companies face new challenges when designing effective and efficient testing strategies for enterprise applications. Incomplete or flawed test data means inaccurate testing, which can lead to application failure and business disruption. More common approaches to building test environments include cloning application production environments and writing custom extract programs. However, these methods can be labor intensive, error prone and costly. No company wants to risk losing customers, market share, brand equity or revenue by delivering applications that have not been thoroughly tested. For this reason, end-to-end application testing is a strategic priority throughout the application development lifecycle.
So, how can IT organizations improve testing efficiencies and reduce the total cost of owning and maintaining enterprise applications? This white paper examines how proven test data management capabilities can help you deliver reliable applications and achieve maximum business value from your application investment.
Case Studies:
Case Study: Caritor
Automation of Monthly Application Release Testing at a Telecom Service Provider
Client Overview
Leading Telecom operator in Europe. The client offers services under one of the most recognised brand names in the telecommunications services industry.
Business Context
The client has a wide range of applications, addressing different functional areas such as Customer Management, Rating and Billing, Operational Reporting, Credit Risk Analysis, and Fraud Detection. There are continuous changes to these live applications, either due to system fixes or due to new projects impacting the existing applications.
Release testing comprises testing various applications on a monthly basis in order to ensure that both the system fixes and the new projects work fine, and also to ensure that the unchanged functionality of the existing applications is not impacted by the changes.
In order to ensure that the unchanged functionality is not affected, regression test packs were developed for each of the applications and the tests were conducted manually.
Challenge
Test cases were being written each month in order to test changes to the existing applications. These test cases would then become part of the regression pack. Thus, over a period of time the regression test pack ended up with so many test cases that testing all of them manually in the short time frames available became virtually impossible.
The onsite team sent across the manual regression test cases, which were used for automation. Appropriate coding standards were used to code the automation scripts. The scripts underwent reviews by the offshore and onsite team members. The scripts thus developed were tested and sent back onsite for some static testing and validation. Caritor's test team then maintains the automated regression test scripts on an ongoing basis.
Some notable challenges faced during the automation project were:
· Since many of the applications were deployed on UNIX servers, they had to be tested from Windows NT client machines
· One of the applications required frequent switching between UNIX back-end applications and a Java front-end. Caritor used a semi-keyword approach to meet this requirement, wherein functions were specified in the data tables (see the sketch below).
Automated regression test scripts have been created and are being maintained by Caritor’s test team on an ongoing basis.
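A minimal sketch of such a semi-keyword (data-table driven) approach follows; the keywords, arguments and actions are illustrative, not Caritor's actual tables or scripts.

# Each row of the "data table" names a function plus its arguments,
# and a small driver dispatches them in order.
def login(user):
    print("logging in as " + user)

def switch_to_unix_backend(host):
    print("switching to UNIX back end on " + host)

def verify_balance(account, expected):
    print("checking that account " + account + " shows " + expected)

KEYWORDS = {
    "login": login,
    "switch_to_unix_backend": switch_to_unix_backend,
    "verify_balance": verify_balance,
}

TEST_TABLE = [
    ("login", {"user": "tester1"}),
    ("switch_to_unix_backend", {"host": "billing01"}),
    ("verify_balance", {"account": "A-100", "expected": "0.00"}),
]

if __name__ == "__main__":
    for keyword, args in TEST_TABLE:
        KEYWORDS[keyword](**args)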
How Caritor Helped
Caritor proposed a solution, which involved automating the execution of monthly release testing. The reasons for choosing an automation approach included:
· Reduced execution time
· Reduced number of resources needed (over a period of time)
· Ability to execute the automated scripts in various environments
· Repetition of the tests for the unchanged functionality without errors
· Easy identification of GUI changes
· Unified Report for the test
Caritor leveraged an onsite-offshore model for this initiative, with 75% of the test team located offshore. The tools used were Mercury TestDirector and WinRunner. The test requirements were specified in TestDirector, and Caritor's test team interfaced with TestDirector and WinRunner to obtain and automate the test requirements.
As part of the “automation of monthly release testing”, Caritor’s test team took on the responsibility to create, maintain and run the automated test scripts in tandem with the manual test scripts.
Client Benefits
The client derived several benefits as a result of this initiative:
· Reduced Time to Market – Automation resulted in reducing the time taken to complete the monthly release testing, while bringing new features and services to market
· Increased Effectiveness of Testing – Reduction in errors led to reduced defect slippage to production
· Access to technically skilled offshore resources enabled the client to derive significant cost benefits
· Automation will complement the testing that will be performed as part of the manual test suite
Events & Conferences:
Software Testing Analysis & Review Conference
Date: 29th September to 3rd October 2008
Place: Anaheim, CA • Disneyland® Hotel
The Top Ten Reasons to Attend STARWEST
1. Over 100 learning sessions: tutorials, keynotes, conference sessions, bonus sessions, and more
2. In-depth tutorials, half- and full-day options—double the number of classes from last year
3. Cutting-edge testing answers from top testing experts
4. Presentations from highly experienced testing professionals
5. Networking opportunities with your peers in the industry
6. Special events—welcome reception, bookstore, meet the speakers, and more
7. The largest testing EXPO anywhere
8. Group discounts—bring your whole team
9. The perfect balance of learning and fun in Southern California
10. All this at the happiest place on Earth—Disneyland® Hotel!
Tuesday, August 12, 2008
Regression tests
Q- How should I write a regression testing section within the test plan?
A- According to the Regression Testing course on TestingEducation.org, regression testing is a common way to manage the risks of change. We might do regression testing by repeating the exact same test as before, or we might reuse the prior test idea, varying the data and secondary conditions across different uses of the test. The course covers why we might want to regression test, covers some common methods for regression testing, and provides an extensive list of readings for more information on the topic.
When writing the regression testing section of a formal test plan, there are several things you might want to consider:
What's the goal of the testing? This will help you understand what types of risks you'll address and how much coverage you'll need. Try to define this as clearly as you possibly can. I find that confusion around what the regression tests are supposed to accomplish is the largest reason why regression testing spirals out of control, becoming expensive and ineffective.
What risks will our regression testing address, and what risks won't it address? Based on the goal of your testing, what specific risks will be addressed and which won't?
What kind of coverage do we want from our regression testing? I find that coverage is one of the hardest things to keep track of in my regression testing. Often, one test may cover multiple risks or multiple areas of the application. Understanding coverage is important in communicating what your regression tests are and are not doing to other project stakeholders.
What techniques will we employ to execute and maintain the tests? Understanding execution will be important. What tests will be manual and what tests automated? If they are automated, what tools will we need? What infrastructure? How will we maintain them over time? If tests are manual, who will do the testing? Using what techniques? What skills will the manual testers need, and which areas of the application will they need to know? (One way of keeping track of execution and coverage for automated tests is sketched after this list of questions.)
What environment(s) will we need to execute the tests? Here you look at what environments you'll need and when you think you'll need them. What data will need to be available? Are there any custom configurations that will need to be deployed? Will the same tests need to be executed against different configurations? How will you manage that?
How will we report the status of the testing? Who is the audience for your status? What level of detail do they want? Which information do they want first? Are some tests more important than others? What obstacles do you have to your regression testing?
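For instance, here is a minimal sketch of one way to keep execution and coverage visible (not from the original answer; it assumes a pytest-style toolchain, and the marker names and tests are hypothetical): tag each automated regression test with the risk areas it covers, then select subsets from the command line.

import pytest

@pytest.mark.regression
@pytest.mark.billing
def test_invoice_total_matches_line_items():
    assert sum([10.00, 2.50]) == 12.50  # placeholder check for the sketch

@pytest.mark.regression
@pytest.mark.billing
@pytest.mark.reporting
def test_monthly_report_includes_all_invoices():
    assert True  # placeholder; note that one test can cover more than one risk area

# Example runs (the markers double as a simple coverage map):
#   pytest -m "regression and billing"        -> run only billing-related regression tests
#   pytest -m regression --collect-only -q    -> list what the regression suite covers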
If your test plan section addresses those items in some way, I think you're ok. The specifics of the format don't really matter if you have the right content there. Just format the information using the same style and tone as the template you're using. If you don't use a template, just structure the information in a way that makes sense for your context.
By Mike Kelly
Wednesday, August 6, 2008
Software testing deliverables
There are core sets of test deliverables that are required for any software testing phase: test plan, test case, defect documentation and status report. When taken together this set of deliverables takes the testing team from planning to testing and on through defect remediation and status reporting. This does not represent a definitive set of test deliverables, but it will help any test organization begin the process of determining an appropriate set of deliverables.
One common misconception is that these must be presented as a set of documents, but there are toolsets and applications available that capture the content and intent of these deliverables without creating a document or set of documents. The goal is to capture the required content in a useful and consistent framework as concisely as possible.
Test plan
At a minimum the test plan presents the test objectives, scope, approach, assumptions, dependencies, risks and schedule for the appropriate test phase or phases. Many test organizations will use the test plan to describe the software testing phases, testing techniques, testing methods and other general aspects of any testing effort. General information around the practice of testing should be kept in a "Best Practices" repository -- testing standards. This prevents redundant and conflicting information from being presented to the reader and keeps the test plan focused on the task at hand -- planning the testing effort. (See "The role of a software test manager".)
Objectives -- mission statement
The objective of the current testing effort needs to be clearly stated and understood by the software testing team and any other organization involved in the deployment. This should not be a sweeping statement on testing the "whole application" -- unless that is actually the goal. Instead the primary testing objectives should relate to the purpose of the current release. If this were a point-of-sale system and the purpose of the current release was to provide enhanced online reporting functionality, then the objective/mission statement could be this:
"To ensure the enhanced online reporting functionality performs to specification and to verify any existing functionality deemed to be in scope."
The test objective describes the "why" of the testing effort. The details of the "what" will be described in the scope portion of the test plan. Once again, any general testing objectives should be documented in the "Best Practices" repository. General or common objectives for any testing effort could include expanding the test case regression suite, documenting new requirements, automating test cases, and updating existing test cases.
In scope
The components of the system to be tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope." This can take the form of an itemized list of those "in scope": requirements, functional areas, systems, business functions or any aspect of the system that clearly delineates the scope to the testing organization and any other organization involved in the deployment. The "What is to be tested?" question should be answered by the in scope portion of the test plan -- the aspects of the system that will be covered by the current testing effort.
Out of scope
The components of the system that will not be tested also need to be clearly defined as being "out of scope." This does not mean that these system components will not be executed or exercised; it just means that test cases will not be included that specifically test these system components. The "What is NOT to be tested?" question should be answered by the out of scope portion of the test plan. Often neglected, this part of the test plan begins to deal with the risk-based scheduling that all test organizations must address -- What parts of the system can I afford not to test? The testing approach section of the test plan should address that question.
Approach
This section defines the testing activities that will be applied against the application for the current testing phase. This addresses how testing will be accomplished against the in scope aspects of the system and any mitigating factors that may reduce the risk of leaving aspects of the system out of scope.
The approach should be viewed as a to-do list that will be fully detailed in the test schedule. The approach should clearly state which aspects of the system are to be tested and how: backup and recovery testing, compatibility/conversion testing, destructive testing, environment testing, interface testing, parallel testing, procedural testing, regression testing, application security testing, storage testing, stress and performance testing, and any other testing approach that is applicable to the current testing effort. The reasoning for using any given set of approaches should be described, usually from the perspective of risk.
Assumptions
Assumptions are facts, statements and/or expectations of other teams that the test team believes to be true. Assumptions specific to each testing phase should be documented. These are the assumptions upon which the test approach was based. Listed assumptions are also risks should they be incorrect. If any of the assumptions prove not to be true, there may be a negative impact on the testing activities. In any environment there is a common set of assumptions that apply to any given release. These common assumptions should be documented in the "Best Practices" repository; only assumptions unique to the current testing effort, and perhaps those common assumptions critical to the current situation, should be documented here.
Dependencies
Dependencies are events or milestones that must be completed in order to proceed with any given testing activity. These are the dependencies that will be presented in the test schedule. In this section the events or milestones that are deemed critical to the testing effort should be listed and any potential impact or risks to the testing schedule itemized.
Risks
Risks are factors that could negatively impact the testing effort. An itemized list of risks should be drawn up and their potential impact on the testing effort described. Risks that have been itemized in the project plan need not be repeated here unless the impact to the testing effort has not already been clearly stated.
Schedule
The test schedule defines when and by whom testing activities will be performed. The information gathered for the body of the test plan is used here in combination with the available resource pool to determine the test schedule. Experience from previous testing efforts, along with a detailed understanding of the current testing goals, will help make the test schedule as accurate as possible. There are several planning and scheduling tools available that make the schedule easier to construct and maintain.
Test case
Test cases are the formal implementation of a test case design. The goal of any given test case or set of test cases is to detect defects in the system being tested. A test case should be documented in a manner that is useful for the current test cycle and any future test cycles. At a bare minimum, each test case should contain the author, name, description, steps, expected results and status.
Test case name
The name or title should contain the essence of the test case, including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases from occurring.
Test case description
The description should clearly state the sequence of business events to be exercised by the test case. The test case description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.
Test case step
Each test case step should clearly state the navigation, data and events required to accomplish the step. Using a common descriptive approach encourages conformity and reuse. Keywords offer one of the most effective approaches to test case design and can be applied to both manual and automated test cases.
Expected results
The expected results are the expected behavior of the system after any test case step that requires verification or validation. This could include screen pop-ups, data updates, display changes or any other discernible event or transaction on the system that is expected to occur when the test case step is executed.
Status
This is the operational status of the test case. Is it ready to be executed?
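As a simple illustration (the field names follow the bare-minimum list above; the example values are hypothetical), such a test case record might be represented as:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseStep:
    navigation: str
    data: str
    expected_result: str

@dataclass
class TestCase:
    author: str
    name: str
    description: str
    steps: List[TestCaseStep] = field(default_factory=list)
    status: str = "Draft"  # e.g. Draft, Ready, Blocked

example = TestCase(
    author="QA Analyst",
    name="POS-Reporting-001: Monthly sales report totals",
    description="Verify the enhanced online reporting totals match posted sales.",
    steps=[TestCaseStep(navigation="Reports > Monthly Sales",
                        data="store=42, month=2008-07",
                        expected_result="Report total equals sum of posted transactions")],
    status="Ready",
)
print(example.name, "-", example.status)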
Documenting defects
The primary purpose of testing is to detect defects in the application before it is released into production. Furthermore, defects are arguably the only product the testing team produces that is seen by the project team. Document defects in a manner that is useful in the defect remediation process. At a bare minimum, each defect should contain the author, name, description, severity, impacted area and status.
Defect name
The name or title should contain the essence of the defect, including the functional area and nature of the defect.
Defect description
The description should clearly state what sequence of events leads to the defect. When possible, include a screenshot or printout of the error.
How to replicate
The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.
Defect severity
The severity assigned to a defect depends on the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production.
Impacted area
The impacted area can be referenced by functional component or functional area of the system. Often both are used.
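A rough sketch (illustrative only, with made-up rules) of how that severity guidance might be expressed as a simple triage helper:

def assign_severity(blocks_testing, business_risk, phase):
    # blocks_testing: does the defect stop the current test phase?
    # business_risk: "high", "medium" or "low" impact if it reached production.
    # phase: the test phase in which the defect was found, e.g. "SIT", "UAT".
    if blocks_testing:
        return "Critical"   # showstopper for the testing effort
    if business_risk == "high":
        return "High"
    if phase == "UAT" and business_risk == "medium":
        return "High"       # late-phase defects carry more rollout risk
    return "Medium" if business_risk == "medium" else "Low"

print(assign_severity(blocks_testing=False, business_risk="high", phase="SIT"))  # High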
Status report
A test organization and members of the testing team will be called upon to create status reports on a daily, weekly, monthly and project basis. The content of any status report should remain focused on the testing objective, scope and scheduled milestones currently being addressed. It is useful to state each of these at the beginning of each status report and then publish the achievements or goals accomplished during the current reporting period, as well as those that will be accomplished during the next reporting period.
Any known risks that will directly impact the testing effort need to be itemized here, especially any "showstoppers" that will prevent any further testing of one or more aspects of the system.
Reporting period
This is the period covered in the current status report. Include references to any previous status reports that should be reviewed.
Mission statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment.
Current scope
The components of the system being tested (hardware, software, middleware, etc.) need to be clearly defined as being "in scope," and any related components that are not being tested need to be clearly itemized as "out of scope."
Schedule milestones
Any schedule milestones being worked on during the current reporting period need to be listed and their current status clearly stated. Milestones that were scheduled but not addressed during the current reporting period need to be raised as risks.
Risks
Risks are factors that could negatively impact the current testing effort. An itemized list of risks that are currently impacting the testing effort should be drawn up and their impact on the testing effort described.
By: David W. Johnson
Integration Testing
Expert’s response: Ironically, integration testing means completely different things to completely different companies. At Microsoft, we typically referred to integration testing as the testing that occurs at the end of a milestone and that "stabilizes" a product. Features from the new milestone are integration-tested with features from previous milestones. At Circuit City, however, we referred to integration testing as the testing done just after a developer checks in -- it's the stabilization testing that occurs when two developers check in code. I would call this feature testing, frankly…
But to answer your question, top-down vs. bottom-up testing is simply the way you look at things. Bottom-up testing is the testing of code that could almost be considered an extension of unit testing. It's very much focused on the feature being implemented and that feature's outbound dependencies, meaning how that feature impacts other areas of the product/project.
Top-down, on the other hand, is testing from a more systemic point of view. It's testing an overall product after a new feature is introduced and verifying that the features it interacts with are stable and that it "plays well" with other features.
The key to testing here is that you are in the process of moving beyond the component level and testing as a system. Frankly, neither approach alone is sufficient. You need to test the parts with the perspective of the whole. One part of this testing is seeing how the system as a whole responds to the data (or states) generated by the new component. You want to verify that data being pushed out by the component are not only well-formatted (what you tested during component testing) but that other components are expecting and can handle that well-formatted data. You also need to validate that the data originating within the existing system are handled properly by the new component.
Real-world examples? Well, let's assume you are developing a large retail management system, and an inventory control component is ready for integration. Bottom-up testing would imply that you set up a fair amount of equivalence-classed data in the new component and introduced that new data into the system as a whole. How does the system respond? Are the inventory amounts updated correctly? If you have inventory-level triggers (e.g., if the total count of pink iPod Nanos falls below a certain threshold, generate an electronic order for more), does the order management system respond accordingly? This is bottom-up testing.
At the same time, you want to track how well the component consumes data from the rest of the system. Is it handling inventory changes coming in from the Web site? Does it integrate properly with the returns system? When an item's status is updated by the warehouse system, is it reflected in the new component?
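A toy sketch of that retail example (the component, threshold and stubbed order system are all hypothetical) might look like this:

# Push equivalence-classed inventory data through a new inventory component and
# check that the rest of the system (here, a stubbed reorder trigger) reacts.
REORDER_THRESHOLD = 5
purchase_orders = []          # stand-in for the order management system

def record_sale(inventory, sku, qty):
    """New inventory component: decrement stock and fire the reorder trigger."""
    inventory[sku] -= qty
    if inventory[sku] < REORDER_THRESHOLD:
        purchase_orders.append({"sku": sku, "qty": REORDER_THRESHOLD * 4})

def test_low_stock_triggers_reorder():
    inventory = {"ipod-nano-pink": 6}
    record_sale(inventory, "ipod-nano-pink", 2)   # drops below the threshold
    assert inventory["ipod-nano-pink"] == 4
    assert purchase_orders and purchase_orders[0]["sku"] == "ipod-nano-pink"

test_low_stock_triggers_reorder()
print("reorder trigger verified:", purchase_orders)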
We see constant change in the testing profession, with new methodologies being proposed all the time. This is good -- it's all part of moving from art to craft to science. But just as with anything else, we can't turn all of our testing to one methodology because one size doesn't fit all. Bottom-up and top-down testing are both critical components of an integration testing plan and both need considerable focus if the QA organization wants to maximize software quality.
By John Overbaugh
Friday, August 1, 2008
Performance testing in a virtual environment
Software testing in a virtual environment
Q- What is the likelihood of capturing accurate load testing results in a virtual test environment? We use LoadRunner/PerformanceCenter for performance testing. Our company is in the process of making use of virtualization. It seems this may be ideal for functional test environments, but not for performance test environment. What is your opinion?
Expert’s response: There are a lot of ways to use virtual environments in your performance testing, so there's no easy answer to this question. I'm assuming that you're referring to hosting the entire application in a virtual environment and running your performance testing against that platform. My answer is that, as always, it depends.
Some research on the topic has found that virtual environments don't scale as well as non-virtual environments. In a study by BlueLock, a company that provides IT infrastructure as a service, they found that "the number of simultaneous users that could be handled by the virtualized server was 14% lower than the number of simultaneous users being handled by the traditional server configuration."
This is consistent with my experience testing financial service applications in virtual environments. Scott Barber points out some other common challenges with working in a virtual environment.
If you don't have much choice, or if you have a lot of pressure to make it work, I would recommend that you perform a comparison performance test to prove out the new platform. If you can do that successfully, you'll have some confidence that the platform is comparable. But just be aware that over time, as the application changes and the server configurations change (both the virtual servers and the physical servers in production), your comparison will become outdated. It may happen faster than you might think.
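A hedged sketch of such a comparison (the response-time samples below are made up; real numbers would come from identical LoadRunner scenarios run against each platform):

from statistics import mean

def percentile(samples, pct):
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

physical_ms = [180, 195, 210, 205, 220, 400, 190, 185]   # hypothetical samples
virtual_ms  = [200, 230, 240, 228, 260, 520, 215, 222]

for label, data in (("physical", physical_ms), ("virtual", virtual_ms)):
    print(f"{label:8s} mean={mean(data):6.1f} ms  90th={percentile(data, 90)} ms")

# If the deltas stay within an agreed tolerance (say 10-15%), the virtual platform
# can be treated as comparable -- until either environment changes again.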
As Scott points out in his talk, the problem isn't necessarily virtualization. It's that we don't always pay attention to all the other factors that affect performance. Differences in software and hardware configurations, network devices, geographic location, firewalls and other security measures, and a host of other factors all affect performance. Virtual environments often only make it more complex to track everything since they introduce their own overhead, rely on different network devices, and can reside in different physical locations.
It's not all doom and gloom. You might be able to virtualize some parts of your application quite successfully. For example, at the April Indianapolis Workshop on Software Testing, Ken Ahrens from iTKO shared an experience where he used the iTKO LISA product to enable service-oriented virtualization -- a process where you virtualize services that your application might rely on. In this case, if you were performance testing the core application and not the services it relies on, then that virtualization wouldn't necessarily affect your performance testing at all.
In that specific case study, before virtualization Ken's customer was unable to run performance tests on a regular basis due to service availability. Testing was an "event" where tens of teams had to get together to make a load test happen, and the high cost meant that this only happened a few times a year. Since they virtualized some of those services, they can run tests daily. Virtualization at the message level also gave them a greater ability to experiment with issues such as "what if this key service slows down significantly?" Or to try different data scenarios, such as "what if the lookup returns 900 records instead of 10 records?"
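To illustrate the idea in miniature (a hypothetical stub, not the iTKO LISA product), a virtualized service is essentially a stand-in whose latency and data volume can be dialed up for experiments like the ones above:

import time

def virtual_lookup_service(query, delay_seconds=0.0, record_count=10):
    """Stub for a dependent lookup service with configurable behavior."""
    time.sleep(delay_seconds)                       # simulate a slow service
    return [{"query": query, "row": i} for i in range(record_count)]

# Normal scenario
start = time.time()
rows = virtual_lookup_service("customer:42")
print(f"baseline: {len(rows)} records in {time.time() - start:.2f}s")

# "What if the lookup slows down and returns 900 records instead of 10?"
start = time.time()
rows = virtual_lookup_service("customer:42", delay_seconds=2.0, record_count=900)
print(f"degraded: {len(rows)} records in {time.time() - start:.2f}s")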
By Mike Kelly