Friday, February 15, 2008

How Do You Tell QA That They Are Wasting Their Time?

Recently I've been struggling with inefficiencies in our process, specifically at the handoff between development and quality assurance.  As our product grows, so do the demands the product puts on quality assurance.  As those demands increase, the QA team demands more of the development team to make sure everything is communicated.  What developers could just change before now has to be documented and communicated so that QA can get their coverage.

What's frustrating to me is that all of this documentation and communication brings serious inefficiency and waste.  Maybe it's just an issue with our process, or our state of mind, or our people.  I'm just hoping that there's someone out there with some thoughts or insight.  I'll give an example.

For this latest release, we decided to stay on top of things and upgrade to Visual Studio 2008 and the .NET Framework 3.5.  So the framework upgrade needed to be communicated to QA.  We needed to give them a spec on the work (which resulted in this post, trying to convince them that testing this wasn't all that important), and they had to dedicate resources to go through that document and deliver a test plan.

So, after the QA person went through the document, they started sending me emails about how to test this.  First came the question, "How would the 3.5 framework upgrade be tested?"  The answer is that it may or may not throw an error; we may not use any 3.5-only features at this point.  That was followed by, "Let's just say, for the sake of argument, that the framework was installed incorrectly.  Would there be a way for end users to notice it?"  This is where it starts to get frustrating for me.  So much waste.  I want to scream, "Just come over to my freaking desk and I'll show you the freaking framework in Add/Remove Programs!"

What I'm really conflicted about is the fact that telling the QA team "don't worry about testing that" is like speaking Japanese to them.  They just don't understand those words.  And that's fine; I think that's good in a way, but it's wasting my time.

Does anyone out there have any advice as to how best to communicate this?  How do you convince a QA team that isn't super-technical that a technical feature, improvement, or fix is safe and doesn't need to be tested?  Is there anything more I can do, or should I just grin and bear it?  I'd love your words of wisdom.  Thanks in advance.


Anonymous said...

To test is to mitigate the risk that a change will cause a problem in the system. Is it really that you don't want to mitigate this risk, or is it that the risk of a big, obvious issue (the whole app throws an error) is moderate, while the risk of a small, minor error is low and extremely expensive to find? In other words, while mitigating risk is good, should you be trusting that your unit tests will mitigate this risk sufficiently? The scenario is similar to OS-level patches and other dependencies. Some very basic coverage is good (the system did not blow up), but detailed regression testing has a very, very high price-to-value ratio and isn't worth it.
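The commenter's price-to-value framing can be made concrete. Here is a minimal sketch of ranking candidate test efforts by expected risk reduction per hour of QA time; every probability, impact, and hour figure below is invented purely for illustration, not measured from any real project:

```python
from dataclasses import dataclass

@dataclass
class TestEffort:
    name: str
    failure_probability: float  # estimated chance the change broke this area (0..1)
    impact: float               # cost of a missed defect, in arbitrary "pain" units
    hours: float                # QA hours needed to run the tests

    @property
    def value_per_hour(self) -> float:
        # Expected pain avoided per hour spent testing.
        return (self.failure_probability * self.impact) / self.hours

# Hypothetical numbers for a framework upgrade.
efforts = [
    TestEffort("smoke test: app starts, main pages load", 0.10, 100.0, 2.0),
    TestEffort("full regression of every screen",          0.01, 100.0, 80.0),
    TestEffort("rerun automated unit/integration suite",   0.05, 100.0, 0.5),
]

for e in sorted(efforts, key=lambda e: e.value_per_hour, reverse=True):
    print(f"{e.name}: {e.value_per_hour:.2f} expected units avoided per hour")
```

With these made-up numbers, the cheap automated suite and a quick smoke test come out far ahead of a full manual regression, which is exactly the commenter's point about price-to-value.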

Anonymous said...

I think the main issue here is that a clear distinction isn't drawn between functional QA (which can be handled by a non-technical person) and technical QA (which should be handled by a technical resource that performs stress testing, examines the likely boundary conditions of the application, etc.).

There is no reason that someone on the functional QA team should even really be aware that an OS-level or Framework-level change has been made. They should be testing things like the workflow of the application and ensuring the specifications on the business requirements have been met. If something doesn't work as specified, then report it as a bug.

The technical QA person should absolutely know about these things, though. Their success and failure conditions aren't things like "did the page load with the correct information that you just saved?" and "are the results ordering correctly when the search completes?" but rather "how did the server respond to that action?" and "what happens to the database resources when we have 3,000 users simultaneously call this web service?".

What if, for example, there was a change in the way SQL connection pooling is handled in .NET 3.5? The functional QA person wouldn't even notice such a thing (unless there was a notable performance implication, and maybe not even then), but a technical QA person could notice an excessive amount of connections to the db server. Or how about a change to the hash implementation of the System.Web.Caching internals? Again, the functional QA team wouldn't notice that. Without a technical QA person (or team) with the appropriate tools, you wouldn't notice something like that until the cache started producing bizarre results in a production environment.
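A technical-QA check like the connection-pooling one above can be as simple as comparing counts sampled during the same test scenario before and after the upgrade. This is a hypothetical sketch; the sample values and the 25% tolerance are invented, and in practice the counts would come from something like the database server's connection statistics:

```python
# Hypothetical technical-QA check: compare DB connection counts sampled while
# running the same load scenario before and after a framework upgrade.

def connections_regressed(baseline, current, tolerance=0.25):
    """Flag a regression if the post-upgrade peak exceeds the
    pre-upgrade peak by more than `tolerance` (25% by default)."""
    return max(current) > max(baseline) * (1 + tolerance)

baseline_samples = [40, 42, 45, 44]   # open connections before the upgrade (invented)
current_samples  = [41, 58, 71, 69]   # open connections after the upgrade (invented)

if connections_regressed(baseline_samples, current_samples):
    print("connection pool behavior changed -- investigate before release")
```

A functional tester would never see this difference, but a one-line threshold check against a recorded baseline surfaces it immediately.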

More than anything, it is these "what if" scenarios that should be keeping a technical QA person awake at night. Hopefully they would be things that developers catch during unit and integration testing, but if we relied on developers to catch everything there would be no need for a QA team at all. Having an experienced technical QA person minding that particular store would mitigate those possibilities.

For example, I remember a few years ago there was an issue with a release of MSXML that changed how XML files were parsed. For performance reasons, Microsoft decided that it would be better to load the file in chunks of a certain size. If the file was less than a certain size, and a certain property was set on the object, nothing was loaded and an ambiguous "ROOT element is missing" error was thrown. To a functional QA person, the application might have worked just fine (if the data saves, the data saves - and this was definitely dependent on the test data). A technical QA team would have known and researched the differences in the new release of MSXML before it was deployed to the servers and would have spotted that a boundary condition test was needed. As it was, it took us several days to even determine what had happened and develop a means of working around it. The clients weren't happy, the non-technical staff was frustrated, and the technical staff was diverted from their expected (but now delayed) workload.
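The boundary-condition test the MSXML story calls for is easy to sketch: generate well-formed documents at sizes just below, at, and just above the parser's buffer size and confirm each one still parses. The sketch below uses Python's standard `xml.etree` as a stand-in parser, and the 4096-byte chunk size is an assumption for illustration; the real value would come from the vendor's release notes:

```python
import xml.etree.ElementTree as ET

ASSUMED_CHUNK = 4096  # hypothetical parser buffer size, not the real MSXML value

def xml_of_size(target: int) -> bytes:
    """Build a well-formed document padded to exactly `target` bytes."""
    shell = b"<root><pad>%s</pad></root>"
    padding = b"x" * max(0, target - len(shell % b""))
    return shell % padding

for size in (ASSUMED_CHUNK - 1, ASSUMED_CHUNK, ASSUMED_CHUNK + 1):
    doc = xml_of_size(size)
    root = ET.fromstring(doc)  # a chunked-read bug would surface as a ParseError here
    print(size, root.tag, len(doc))
```

Three tiny fixtures like these, run against the new parser release before deployment, would have caught the "ROOT element is missing" failure in minutes instead of days.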

Which actually brings up another issue altogether, which is deploying non-tested OS patches to a production server....
