That is the question. Automated testing is a topic of interest for many organizations. The promised benefits of automated testing – reduced costs and greater scalability, to name a few – are enticing. But do you have the information you need to decide whether automated testing is a fit for your organization? More importantly, are you asking the right questions?
Within this discussion, we will draw on our observations as a systems integrator working in the trenches with customers across the globe. Whether we are helping customers implement Oracle systems in Latin America, Asia-Pacific, or some other part of the world, time is a scarce commodity and accuracy is of the utmost importance; discovering unexpected issues with a corporate system late in a project is a scenario that no one wants to contemplate.

We also draw on our experience as a provider of private cloud and managed services. Many of our cloud and managed services customers run 24×7 business operations, driven by a distributed user base spanning the globe. IT Convergence (ITC) manages and supports their systems, including both out-of-the-box (OOB) functionality and the customizations/extensions that customers have built over time to meet their unique business requirements. To provide the stability and performance these customers demand, ITC must carefully consider its approach to testing routine patches as well as enhancements.

With this background in mind, we propose that an evaluation of automated testing should, at a minimum, touch on the following key considerations:
1) What’s at stake?
2) The present state of affairs
3) Budget
4) Organizational readiness
1) What’s at Stake?
The first aspect to consider is the scale of the problem. In other words: if a corporate system experiences an outage or other serious service disruption, how big of an issue will that be? The following are some scenarios that call for applications able to operate at exceptionally high levels of performance and stability.
Business applications that are critical to operations, such as e-commerce or reservation systems in the services industry, are generally worthwhile targets for test automation. Any activity where revenue generation and the company’s ability to serve customers are directly affected can be considered critical. Some examples that come to mind: a national coffee chain’s POS system having a very bad Friday, or a mobile commerce platform suffering a meltdown that would have been considered a catastrophe of epic proportions were the impacted company not absolutely gushing cash at the time. Here is an example from the airline industry – although this particular disruption appears to have been caused by a networking issue, I think we can all agree that a) this is not the type of system you want to have go down unexpectedly, and b) when this does happen to an airline and your travels suffer as a result, it is inevitably on an occasion when you need to be somewhere in a hurry. Hence the reason you paid hundreds of dollars to fly on an airplane.
Businesses that are required to meet regulatory standards, such as pharmaceutical manufacturing or telecom billing, tend to see the value in test automation. As anyone who has worked in the pharmaceutical industry is aware, adherence to Good Manufacturing Practices is a cost of doing business. Because pharmaceutical products (at least those approved by the FDA) will eventually make their way into humans across the globe, Quality, with a capital Q, is the name of the game. Validation is the broad scope of activities required to ensure a) that the manufacturing process consistently produces products to the specifications used in the process design, and b) that the process is designed in such a way that its output meets the objectives the process was intended to satisfy. Financial services is another regulated industry (although considerable disagreement exists over whether finance is too heavily or too lightly regulated, let’s save that debate for another day). In terms of paying the price for a faulty IT system, it’s hard to imagine a worse scenario than RBS being fined the equivalent of $87 million for system disruptions.
Systems that are subject to high or highly variable loads, which are frequently also systems that support many users, can be good candidates for test automation. Although back-office accounting systems may seem unlikely to generate much excitement under most circumstances, during month-end close these systems can be subject to major spikes in utilization as finance and accounting teams labor to close the books and roll up financial transactions from across the enterprise, all the while repeatedly running reports and queries against the same system. When it comes to a sudden spike in transaction load causing highly visible failure, the classic example is the failed launch of healthcare.gov. Although the subsequent investigations point to management challenges surrounding the project, one can’t help but wonder: had thorough functional and load testing of the site been performed prior to go-live, would it really have been given the green light?
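The month-end spike scenario above is exactly what a basic automated load test probes: fire many concurrent requests at a system and see what latency looks like. Here is a minimal sketch in Python; `run_report` is a hypothetical stand-in for a real report query (the sleep simulates database work), not part of any specific product.

```python
# Minimal load-spike sketch. run_report() is a hypothetical stand-in
# for a real report query; a real load test would hit an actual
# endpoint or database instead of sleeping.
import time
from concurrent.futures import ThreadPoolExecutor

def run_report(report_id):
    """Simulated report query; the sleep stands in for database work."""
    time.sleep(0.01)
    return f"report-{report_id}: ok"

def spike(n_requests=50, n_workers=25):
    """Fire n_requests concurrent 'reports' and measure total elapsed time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(run_report, range(n_requests)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = spike()
print(f"{len(results)} reports completed in {elapsed:.2f}s")
```

Even a toy harness like this makes the question concrete: if month-end means 50 simultaneous report runs instead of the usual 5, does response time degrade gracefully or fall off a cliff?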
So, if your system going down will plunge the majority of North America into darkness or shut down a stock exchange, automated testing might be something to consider. However, if you are trying to build your ROI case for test automation around ensuring that your office’s internal ridesharing system is rock solid, you may want to reconsider whether this is truly the road to automated testing greatness.
2) The Present State of Affairs
A second key consideration is the state of your current testing solution. Are your testers business users? Are they part of your IT team? In the first scenario, testing is performed by highly skilled business users. In a large number of organizations, this is the status quo; after all, who knows how the system should function better than the business user? And isn’t the reason we do user acceptance testing to make sure the users can’t complain to IT later that the system doesn’t perform as expected?
While this seems like a workable idea at first glance, it has a few major drawbacks. First, the users tend to be highly paid employees. Does your project management approach, or your estimate of the system’s total cost of ownership, assign a hard dollar figure to the cost of the users’ time consumed by project test cycles and ongoing regression testing? If so, pat yourself on the back; you are in a very small minority. Another issue that calls into question the practicality of relying on users for testing is that while they might be perfectly capable of informing you that a particular feature or function doesn’t work as required, it is unlikely that users will provide thorough documentation comparing what the system actually did against the desired behavior, and even less likely that they will follow up with an analysis of the underlying problem, which is, of course, the next step toward getting the bug fixed.
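That expected-versus-actual comparison is precisely what an automated check captures for free. As a minimal sketch, suppose a hypothetical `calculate_invoice_total()` function represents a piece of business logic under test (the name and logic are illustrative, not from any specific system):

```python
# Minimal sketch of an automated regression check against a
# hypothetical calculate_invoice_total() function; the function name
# and business rule are illustrative assumptions, not a real system.

def calculate_invoice_total(line_items, tax_rate):
    """Hypothetical business function under test."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_applies_tax():
    items = [(2, 10.00), (1, 5.50)]   # (quantity, unit price)
    expected = 27.54                  # 25.50 subtotal * 1.08
    actual = calculate_invoice_total(items, tax_rate=0.08)
    # The assertion records both expected and actual values --
    # exactly the comparison manual testers rarely write down.
    assert actual == expected, f"expected {expected}, got {actual}"

test_invoice_total_applies_tax()
print("regression check passed")
```

When the check fails, the failure message itself documents what the system did versus what it should have done, giving the developer a concrete starting point instead of a vague “it doesn’t work.”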
One last downfall of this approach, which the users have no doubt told you about (perhaps more than once), is that being dragged into IT testing activities takes users away from their day-to-day job responsibilities. One might actually get the impression that they don’t enjoy serving as testers for IT! The nerve, after all we’ve done for them!
Costly employees burning up time performing system testing might be an indication that test automation could provide a lot of value. On the other hand, manual testing can be part of a valid IT management strategy. Using internal resources to perform manual system testing may be a deliberate training strategy for growing development resources in-house. This approach is attractive because it kills two birds with one stone: it takes care of testing work that needs to get done at a low cost while building a pipeline of development talent. To be successful with this strategy, management support is required. Throwing a large number of junior resources at a set of testing requirements and stepping away is almost guaranteed to generate confusion and a spectacularly low level of productivity. Oversight is key.
Another reason to continue with manual testing performed by internal resources is cross-training. With proper documentation, having in-house business analysts perform testing outside of their normal functional responsibilities can mitigate the risk of valuable system and process knowledge being too concentrated in any one employee. Beyond the risk of employees leaving for greener pastures, a good cross-training strategy provides much more flexibility for staffing both routine and exceptional business activities.
Stay tuned for part 2, where we will cover the Budget and Organizational Readiness considerations of automating (or not automating) your testing.