9 Tips about Outsourcing Your Software Testing

Not ready to trust a service provider with the full lifecycle of software development? Maybe you'd be more comfortable trying outsourcing with a discrete but vital aspect of the work — like testing. Vigilant Technologies, in Norwood, MA, specializes in quality assurance testing, automation testing, regression testing, performance testing, and production monitoring. In this article, Jon Land, president and general manager, and Matt Hooper, chief technology officer, provide nine tips on what you can expect when you go the outsourcing route with your software testing work.

–> TIP #1: Don't expect that your internal people are going to be happy to see you bringing in an outside service provider to do testing of their systems. But do expect those attitudes to turn around if the service provider is doing its job right.

Mr. Hooper and Mr. Land say they're used to going into situations where people don't want them there. How do they counter the negative feelings about their presence?

"We take a very educational approach," said Mr. Hooper. That means letting internal folks know that these outsiders have been in their shoes, that they've had the same kinds of pressures from management that the internal people are now feeling — and that they're not there to replace anybody on staff. It also means getting across the idea that they have some techniques they'll gladly share. It also means training internal staff on how to use specific testing tools through hands-on lessons in their own environments.

–> TIP #2: The right service provider can act as a conduit to offshore expertise (and savings).

"We go in and set up and build and have the client relationship," said Mr. Land. "We understand from a business perspective what [the client] wants to achieve. We understand the technologies and specific requirements. Then we develop the plans and the foundation and set up the architecture and approach to delivering that."

As Vigilant defines the specific components of work (such as testing scripts) that need to be created for the project, it "packages" those up and ships them to offshore partners to develop. Along with that, Vigilant creates a "remote access scenario" to give everybody access to the same systems and controls "to ensure that we are acting as a virtual team and that the management has a view into it."

So how does Vigilant decide when to offshore work? Typically, it happens in "larger engagements," according to Mr. Land. "If there is a scope of work that we can deliver within a month [involving] a couple of people, I am not going to bother going through the expense of starting up an offshore arrangement."

He cites one engagement where there were so many tasks and automation scripts that had to be developed, he arranged to have a handful of offshore people for three months to do that coding, with his company managing the work.

Offshoring doesn't happen where there are specific regulations or security concerns involved. Likewise, if there's a high level of application training that needs to take place for all those working on the code, that can be a barrier to offshoring the work as well. Mr. Land said his company evaluates issues such as, "What does the application do? How does it function? How do you work with data? What is the type of validations?" From there, they can decide whether the project is extensive enough to warrant the investment of getting an offshore staff up and running.

–> TIP #3: If you're outsourcing code development, don't engage the same service provider that's writing the code to do the testing of the code.

That's a common mistake companies make, said Mr. Hooper. The problem, as he sees it, is that the moment you have the same company do the quality testing, suddenly, you have "instituted a political arrangement. It's no longer about the quality of the code. It's no longer about the requirements. It's all about the positioning and management so that someone doesn't lose their job."

In other words, if a service provider has written the code, then does testing on it, what are they going to do with the results, which may show how buggy it is? If they bring a list of bugs back to the client, the client will probably say, "Hey, you wrote the code — don't you know what you're doing?"

"It's the fox watching the hen house," said Mr. Hooper.

So quality assurance work requires objectivity. That means bringing in an independent entity for at least some aspects of it.

–> TIP #4: Your testing plans should be declared right up front, along with the application specifications — and that's something that should be done internally.

Most companies don't do this, according to Mr. Hooper. "They develop the application first, based on what the user wants, and then they do testing on what has been developed." This creates an environment where it's hard to quantify what the problem is without pointing fingers or being blamed.

The business intelligence side of the testing equation needs to be supplied by internal people, whether it's business analysts in the QA group or some other role. "That really is the company's intellectual property, something of value that needs to be augmented and developed," said Mr. Land.

So, first and foremost, the coaching provided by Vigilant is targeted at the QA team, to show them how to develop a testing suite upfront for functional testing, before the actual coding of the application has really begun. A service provider can coach testers on what tools to use, how to automate the process, how to make it repeatable and how to obtain better output for analysis.
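Declaring the testing suite up front can be as concrete as writing the automated checks before the code exists. A minimal sketch using Python's unittest, where `order_total` and its tax rule are hypothetical stand-ins for a real business requirement (the technique, not Vigilant's actual tooling):

```python
import unittest

# Hypothetical business rule, declared before implementation: an order
# total must apply a tax rate and round to cents.
def order_total(prices, tax_rate):
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalSpec(unittest.TestCase):
    """Functional spec written up front; the code is built to pass it."""

    def test_applies_tax(self):
        self.assertEqual(order_total([10.00, 5.00], 0.05), 15.75)

    def test_empty_order_totals_zero(self):
        self.assertEqual(order_total([], 0.05), 0.0)
```

Run with `python -m unittest` against the module; the suite fails until the implementation satisfies the declared behavior, which keeps the specification and the tests in lockstep from day one.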

But the coaching is also directed at the developers doing the coding and perhaps doing unit testing. Vigilant's analysis might show how objects are making the wrong call to a back-end system and opening too many connections for the network that will support the application in the actual business environment (vs. the high-speed network the developers are working on).
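That kind of finding can be made concrete. A minimal Python sketch, with hypothetical host and paths, contrasting the per-request connection pattern that hides on a fast development network with a pooled alternative:

```python
import http.client

def fetch_naive(host, paths):
    """Anti-pattern: a fresh TCP connection (and handshake) per request.
    Cheap on a developer's LAN, costly over the production network."""
    results = []
    for path in paths:
        conn = http.client.HTTPConnection(host)  # new connection each time
        conn.request("GET", path)
        results.append(conn.getresponse().read())
        conn.close()
    return results

def fetch_pooled(host, paths):
    """One keep-alive connection reused for the whole batch."""
    conn = http.client.HTTPConnection(host)
    try:
        results = []
        for path in paths:
            conn.request("GET", path)
            results.append(conn.getresponse().read())
        return results
    finally:
        conn.close()
```

On a developer's workstation the two versions look equally fast; a performance test run against a production-like network is what exposes the difference.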

Or, more seriously, it could reveal coding practices that don't make for a secure computing environment. "At almost every engagement, we expose some level of risk that [the company was] unaware of," said Mr. Hooper. He goes so far as to call most companies apathetic about security. "I just don't think enough companies have been sued or enough companies have lost money for them to get the fear factor into them."

The goal of testing isn't to lay blame, but to quantify the results against the client environment and educate the programmers so they can go into the next phase of the application lifecycle with checks and balances.

–> TIP #5: Multi-vendor scenarios require tight service level agreements and management — and sometimes just a measure of diplomacy on the part of the individuals involved.

Setting expectations with providers and clients through SLAs helps define roles and responsibilities.

But even then ill-will can crop up. Mr. Hooper cited one situation in which performance testing at the client site for a newly purchased application revealed sluggish performance. The application vendor began to get defensive and nervous. After all, if the system was slow, it was his fault, right?

The Vigilant consultant said something to the effect of, "Boy, we have to make sure we document things really well so that no one gets blamed here." That simple statement knocked down the barriers between the two vendor reps so they could work together to uncover the real source of the performance problems, which may have been the application (in which case the application vendor was going to need to help in diagnosing the situation), but could also have lain with the infrastructure at the client site.

–> TIP #6: If your systems aren't well documented, expect the service provider to require more time to get up to speed. This means you'll pay higher fees.

When Vigilant evaluates a new project, it looks at the complexity of the application, the amount of documentation available for it and the amount of training necessary. "Some environments are structured and well created and well supported, so you can pretty much go through their internal training to understand what is going on — and that can be done in whatever the timeframe is," said Mr. Hooper. "Others [require] a lot of hunting and pecking around in the application, of working with the users to fully understand the nature of the business, the transaction, the functions, etc."

The kinds of questions Vigilant asks in phase one of a new testing project are: What is the goal of this application and supporting business requirements? What are the business goals that need to be met? What are the requirements that need to be achieved in order to declare that an application is ready for production?

Phase two asks, what is the actual functionality that needs to be tested to achieve these results? What is the testing environment? What data is necessary? How are we going to validate the results? What is required for management and reporting purposes? What is the team going to look like? Who is involved from the client side? Who is going to be involved during the process from the service provider side? Who is going to be involved after the process? Who has to be trained along the way to support this environment after we're done with our effort?

In the third phase, the company inventories the technologies required, develops the scripts, runs and validates them, and documents the results.

To obtain this kind of information, Vigilant sends out questionnaires to its clients before the engagement begins. That lets them understand the project better beforehand and also tells them how well defined the project is within the organization. (Vigilant is making one of its questionnaires available to Sourcingmag.com readers. You'll find the link to that at the end of this article.)

–> TIP #7: Forget about RFPs. Use a rapid assessment process to evaluate service providers.

Vigilant does what it can to avoid the RFP process, because, as Mr. Land says, "We're not a real fan of just responding to blind RFPs."

Through its questionnaires, Vigilant obtains a good measure of intelligence about the project. From there it holds meetings by phone or in person with the client for additional elucidation.

By the time those discussions have finished, the client should have a fairly solid idea about its testing framework and what needs to be done.

–> TIP #8: The testing portion of the project could soak up as little as 1-2% of the total budget, or as much as 10-20%.

It all depends, said Mr. Land, on the nature of the application, the functionality required, and the security measures that need to be taken.

Timewise, it can take anywhere from a couple of weeks to three months. When the client appears unsure about the project, Vigilant works on a time-and-materials basis. When it's a highly structured project with tightly defined requirements, Vigilant will work on a fixed-cost basis.

–> TIP #9: Don't baseline before you know what the important metrics are to your company.

Baselining can apply to many different operations within the organization: system response time, system uptime, application response time, database capacity, server capacity. But defining which metrics matter most to the running of the business will help target baselining where it needs to happen.
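As a sketch of what "metrics first" can look like in practice, the snippet below baselines one metric (application response time) and reduces the raw samples to the numbers a business might actually track; the endpoint URL and sample count are hypothetical, and the p95 is a simple nearest-rank approximation:

```python
import statistics
import time
import urllib.request

def summarize(timings):
    """Reduce raw timing samples to the metrics the business tracks."""
    ordered = sorted(timings)
    return {
        "median_s": statistics.median(ordered),
        # Nearest-rank 95th percentile approximation.
        "p95_s": ordered[int(0.95 * (len(ordered) - 1))],
    }

def baseline_response_time(url, samples=20):
    """Sample one endpoint repeatedly and return its baseline summary."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        timings.append(time.perf_counter() - start)
    return summarize(timings)
```

The point isn't the code itself but the order of operations: the business decides that median and 95th-percentile response time are the metrics that matter, and only then is the baseline collected against them.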

Sound obvious? Even the biggest clients and service providers can botch this one. Mr. Hooper believes that's where IBM might have gone wrong in its outsourcing agreement with JPMorgan Chase, which canceled the contract earlier this year, at, presumably, great expense. "IBM knows what they're doing. But clearly the reason that it failed is that someone did not say, 'This means success,' and IBM wasn't managing to that. I know that IBM has the capabilities of doing [the work], but somewhere down the line, it fell through."

Useful Links:

Vigilant Technologies

Vigilant Pre-deployment Questionnaire (PDF file)
/docs/free/Vigilant Pre-deployment Questionnaire.pdf