Guidelines for Implementing Database Testing Processes and Procedures
Of all the possible elements
that make up a testing strategy, there is really only one key to
success: consistency. Tests must be repeatable, and must be run the same
way every time, with only well-known (i.e., understood and documented)
variables changed. If tests are run inconsistently, and it isn't clear what changed between runs, any problems the tests identify will be difficult to trace back to their cause.
Development teams should
strive to build a suite of tests that are run at least once for every
release of the application, if not more often. These tests should be
automated and easy to run. Preferably, the suite of tests should be
modular enough that if a developer is working on one part of the
application, the subset of tests that apply to only that section can be
easily exercised in order to validate any changes.
Once you've built a set of
automated tests, you're one step away from a fully automated testing
environment. Such an environment should retrieve the latest code from
the source control repository, run appropriate build scripts to compile a
working version of the application, and run through the entire test
suite. Many software development shops use this technique to run their
tests several times a day, throwing alerts almost instantly if problem
code is checked in. This kind of rigorous automated testing is called continuous integration,
and it's a great way to take some of the testing burden out of the
hands of developers while still making sure that all of the tests get
run as often as (or even more often than) necessary. A great free tool to help set up continuous integration in .NET environments is CruiseControl.NET, available at http://sourceforge.net/projects/ccnet.
Testers must also consider
the data backing the tests. It can often be beneficial to generate test
data sets that include every possible case the application is likely to
see. Such a set of data can guarantee consistency between test runs, as
it can be restored to its original state. It can also guarantee that
rare edge cases are tested that might otherwise not be seen.
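One simple way to guarantee that consistency is to restore a baseline copy of the test database before every run. Here is a minimal sketch, assuming a SQL Server test database named ProductTest and a baseline backup file at a hypothetical path:

-- Restore the baseline test data set so that every run starts from
-- exactly the same state. The database name and backup path below are
-- hypothetical; adjust them for your environment.
USE master;

ALTER DATABASE ProductTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE ProductTest
    FROM DISK = 'C:\TestData\ProductTest_baseline.bak'
    WITH REPLACE;

ALTER DATABASE ProductTest SET MULTI_USER;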
It's also recommended that a
copy of actual production data (if available) be used for testing near
the end of any given test period. Oftentimes, generated sets can lack
the realism needed to bring to light obscure issues that only real users
can manage to bring out of an application.
Why Is Testing Important?
There are only two
reasons that software gets tested at all. First, testing is done to find
problems that need to be fixed. Second, testing is done to verify that no such problems exist. It can be argued that the only purpose of
software is to be used by end users, and therefore, the only purpose of
testing is to make sure that end users don't encounter issues.
Eventually, all software
must be tested. If developers or a quality assurance team fail to fully
test an application, it will be tested by the end users trying to use
the software. Unfortunately, this is a great way to lose business; users
are generally not pleased with buggy software.
Testing by
development and quality assurance teams validates the software. Each
kind of testing that is done validates a specific piece of the puzzle,
and if a complete test suite is used (and the tests are passed), the
team can be fairly certain that the software has a minimal number of
bugs, performance defects, and other issues. Since the database is an
increasingly important component in most applications, testing the
database makes sense; if the database has problems, they will propagate
to the rest of the application.
What Kind of Testing Is Important?
From the perspective of a
database developer, only a few types of tests are really necessary most
of the time. Databases should be tested for the following issues:
Interface consistency should be validated, in order to guarantee that applications have a stable structure for data access.
Data availability and authorization tests are similar to interface consistency tests, but focus more on who can get data from the database than on how the data should be retrieved.
Authentication tests verify that valid users can log in and that invalid users are refused access. These tests matter only if the database itself is being used to authenticate users.
Performance tests are important for verifying that the user experience will be positive, and that users will not have to wait longer than necessary for data (a minimal timing check is sketched after this list).
Regression testing covers every other type of test, but generally focuses on catching the reappearance of issues that were previously fixed. A regression test validates that a past fix continues to work.
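The following is a minimal sketch of such a timing check; the procedure name (GetProductList) and the 500 ms threshold are hypothetical, and a real suite would typically log the measurement as well as pass or fail:

-- Minimal performance check: fail the test if the procedure under test
-- exceeds an agreed-upon response-time threshold. The procedure name and
-- the threshold are hypothetical.
DECLARE @start DATETIME, @elapsed_ms INT;

SET @start = GETDATE();

EXEC GetProductList;  -- hypothetical procedure under test

SET @elapsed_ms = DATEDIFF(MILLISECOND, @start, GETDATE());

IF @elapsed_ms > 500
    RAISERROR('Performance test failed: call took %d ms.', 16, 1, @elapsed_ms);
ELSE
    PRINT 'Performance test passed.';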
How Many Tests Are Needed?
Although most
development teams lack a sufficient number of tests to test the
application thoroughly, in some cases the opposite is true. Too many
tests can be just as much of a problem as not enough tests; writing
tests can be time consuming, and tests must be maintained along with the
rest of the software whenever functionality changes. It's important to
balance the need for thorough testing with the realities of time and
monetary constraints.
A good starting point for
database testing is to create one unit test per interface parameter
"class," or group of inputs. For example, consider the following stored
procedure interface:
CREATE PROCEDURE SearchProducts
    @SearchText VARCHAR(100) = NULL,
    @PriceLessThan DECIMAL = NULL,
    @ProductCategory INT = NULL
AS
    -- body omitted; only the parameter interface is relevant here
This stored procedure returns data about products based on three parameters, each of which is optional, according to the following (documented) criteria:
A user can search for text in the product's description.
A user can search for products where the price is less than a given input price.
A user can combine a text search or price search with an additional filter on a certain product category, so that only results from that category are returned.
A user cannot search on both text and price simultaneously. This condition should return an error.
Any other combination of inputs should result in an error.
In order to validate the
stored procedure's interface, one unit test is necessary for each of
these conditions. The unit tests that pass in valid input arguments
should verify that the stored procedure returns a valid output result
set per its implied contract. The unit tests for the invalid
combinations of arguments should verify that an error occurs when these
combinations are used. Known errors are part of an interface's implied
contract.
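For example, a unit test for the disallowed text-plus-price combination might look something like the following sketch (plain T-SQL, with any test-framework plumbing omitted; the parameter values are arbitrary):

-- Unit test: supplying both @SearchText and @PriceLessThan violates the
-- documented contract, so the call is expected to raise an error.
DECLARE @error_raised BIT;
SET @error_raised = 0;

BEGIN TRY
    EXEC SearchProducts
        @SearchText = 'widget',
        @PriceLessThan = 10.00;
END TRY
BEGIN CATCH
    SET @error_raised = 1;  -- the expected outcome for this combination
END CATCH;

IF @error_raised = 1
    PRINT 'Test passed: text + price search was rejected as expected.';
ELSE
    RAISERROR('Test failed: expected an error for the text + price search.', 16, 1);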
In addition to these
unit tests, an additional regression test should be produced for each
known issue that has been fixed within the stored procedure, in order to
ensure that the procedure's functionality does not regress over
time.
Although this seems like a
massive number of tests, keep in mind that these tests can—and
should—share the same base code. The individual tests will have to do
nothing more than pass the correct parameters to a parameterized base
test.
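One possible shape for that parameterized base test, sketched as a T-SQL stored procedure (the test procedure's name and its simple pass/fail reporting are assumptions, not part of any particular framework):

-- Hypothetical parameterized base test: each individual unit test calls
-- this procedure with one combination of inputs plus a flag indicating
-- whether that combination is expected to succeed or to raise an error.
CREATE PROCEDURE TestSearchProducts
    @SearchText VARCHAR(100) = NULL,
    @PriceLessThan DECIMAL = NULL,
    @ProductCategory INT = NULL,
    @ExpectError BIT
AS
BEGIN
    DECLARE @error_raised BIT;
    SET @error_raised = 0;

    BEGIN TRY
        EXEC SearchProducts
            @SearchText = @SearchText,
            @PriceLessThan = @PriceLessThan,
            @ProductCategory = @ProductCategory;
    END TRY
    BEGIN CATCH
        SET @error_raised = 1;
    END CATCH;

    IF @error_raised <> @ExpectError
        RAISERROR('Test failed for the supplied parameter combination.', 16, 1);
    ELSE
        PRINT 'Test passed.';
END;

Each individual unit test then reduces to a single call, such as EXEC TestSearchProducts @SearchText = 'widget', @PriceLessThan = 10.00, @ExpectError = 1;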
Will Management Buy In?
It's unfortunate
that many management teams believe that testing is either a waste of time or, at best, not something that needs to be a well-integrated part of the software development process. Many software shops, especially smaller ones, have no quality assurance staff at all, and their development schedules are so compressed that little testing gets done and full functional testing is nearly impossible.
Several companies I've
done work for have been in this situation, and it never results in the
time or money savings that management thinks it will. On the contrary, time and money are actually wasted by the lack of testing.
A test process
that is well integrated into development finds most bugs upfront, when
they are created, rather than later on. A developer who is currently
working on enhancing a given module has an in-depth understanding of the
code at that moment. As soon as he or she moves on to another module,
that knowledge will start to wane as focus goes to other parts of the
application. If defects are discovered and reported while the developer is still in the trenches, he or she will not need to relearn the code before fixing the problem, which saves a lot of time. These time
savings translate directly into increased productivity, as developers
end up spending more time working on new features, and less on fixing
defects.
If management teams
refuse to listen to reason and allocate additional development time for
proper testing, try doing it anyway. Methodologies such as test-driven development, in which you write tests for a routine before writing the routine itself and then work until the tests pass, can greatly enhance overall developer productivity. Adopting a testing
strategy—with or without management approval—can mean better, faster
output, which in the end will help to ensure success.
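As a rough illustration of that workflow applied to a database routine (the object names, parameter values, and expected result below are all hypothetical), the test is written first, fails because the routine does not yet exist, and then the routine is implemented until the test passes:

-- Step 1: write the test before the routine exists. Running it now fails,
-- because dbo.GetOrderTotal has not been created yet. The order ID and
-- expected total refer to hypothetical seeded test data.
DECLARE @total DECIMAL(10, 2);

EXEC dbo.GetOrderTotal @OrderId = 1, @Total = @total OUTPUT;

IF @total = 25.50
    PRINT 'Test passed.';
ELSE
    RAISERROR('Test failed: unexpected total for order 1.', 16, 1);

-- Step 2: create dbo.GetOrderTotal, then rerun the test until it passes.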