This post discusses the use of test data when developing automated tests based on the testability features that were released with Microsoft Dynamics NAV 2009 SP1. The practices outlined here have been applied during the development of the tests included in the Application Test Toolset for Microsoft Dynamics NAV 2009 SP1.
Overall, the execution of automated tests proceeds according to a four-phased pattern: setup, exercise, verify, teardown. The setup phase is used to place the system into a state in which it can exhibit the behavior that is the target of a test. For data-intensive systems such as Dynamics NAV, the test data is an important part of setting system state. To test the award of discounts, for instance, the setup phase may include the creation of a customer and setting up a discount.
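In a test codeunit, this four-phase pattern might look like the following sketch. The creation and verification helpers (CreateCustomer, CreateInvoiceDiscount, and so on) are hypothetical names used for illustration, not functions shipped with the toolset:

```
PROCEDURE TestInvoiceDiscountIsAwarded();
BEGIN
  // Setup: put the system in a state where the discount can be awarded
  CreateCustomer(Customer);            // hypothetical helper
  CreateInvoiceDiscount(Customer,5);   // hypothetical helper: 5% discount

  // Exercise: trigger the behavior under test
  CreateAndPostSalesInvoice(Customer); // hypothetical helper

  // Verify: check that the discount was actually awarded
  VerifyInvoiceDiscount(Customer,5);   // hypothetical helper

  // Teardown: restore state, e.g. by letting the test runner
  // reset the default fixture before the next test
END;
```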
One of the biggest challenges of testing software systems is deciding what test data to use and how and when to create it. In software testing, the term fixture describes the state in which the system under test must be for it to exhibit a particular outcome when exercised. The fixture is created in the setup phase. For an NAV application, that state is mostly determined by the values of all fields in all records in the database. Essentially, there are two options for creating a test fixture: load a prebuilt set of data, or have each test create the data it needs from scratch.
The advantage of using a prebuilt test fixture is that most of the data required to start executing test scenarios is already present. In the NAV test team at Microsoft, for instance, much of the test automation runs against the same demonstration data (CRONUS) that is installed when the Install Demo option is selected in the product installer. That data is reloaded as necessary during test execution to ensure a consistent starting state.
In practice, a hybrid approach is often used: a common set of test data is loaded before each test executes and each test also creates additional data specific to its particular purpose.
To reset the common set of test data (the default fixture), one can either execute code that (re)creates that data or restore a previously created backup of the default fixture. The Application Test Toolset contains a codeunit named Backup Management, which implements a backup-restore mechanism at the application level. It may be used to back up and restore individual tables, sets of tables, or an entire company. Table 1 lists some of the function triggers available in the Backup Management codeunit; the DefaultFixture function trigger is particularly useful for recreating a fixture.
Table 1 Backup Management

| Function trigger | Description |
| --- | --- |
| DefaultFixture | When executed the first time, creates a special backup of all the records in the current company. Any subsequent time it is executed, restores that backup in the current company. |
| BackupSharedFixture(filter) | Creates a special backup of all tables included in the filter. |
| RestoreSharedFixture | Restores the special backup that was created earlier with BackupSharedFixture. |
| BackupCompany(name) | Creates a named backup of the current company. |
| RestoreCompany(name) | Restores the named backup in the current company. |
| BackupTable(name, table id) | Creates a backup of a table in a named backup. |
| RestoreTable(name, table id) | Restores a table from a named backup. |
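For example, the DefaultFixture and RestoreTable function triggers might be used as follows. In this sketch, BackupMgt is assumed to be a global variable of type Codeunit pointing at Backup Management, and the backup name 'SALES' is illustrative:

```
// First call: backs up all records in the current company.
// Every later call: restores that backup, resetting the default fixture.
BackupMgt.DefaultFixture;

// Restore a single table (here table 18, Customer) from a named backup
// that was created earlier.
BackupMgt.RestoreTable('SALES',18);
```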
There are both maintainability and performance considerations in creating fixtures. First, there is the code that creates the fixture: when multiple tests use the same fixture, it makes sense to prevent code duplication and share that code by modularizing it in separate (creation) functions and codeunits.
From a performance perspective, there is the time required to create a fixture. For a large fixture, this is likely to be a significant part of a test's total execution time. In such cases, consider sharing not only the code that creates the fixture but also the fixture instance itself, by running multiple tests without restoring the default fixture in between. A shared fixture, however, introduces the risk of dependencies between tests, which may result in hard-to-debug test failures. This problem can be mitigated by applying test patterns that minimize data sensitivity.
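One way to minimize data sensitivity is to let each test create the master data it depends on instead of assuming its presence in the shared fixture. A sketch of such a creation function follows; the numbering scheme is illustrative only:

```
PROCEDURE CreateCustomer(VAR Customer : Record 18);
BEGIN
  // Create a fresh customer so the test does not depend on, or
  // interfere with, records left behind by other tests.
  Customer.INIT;
  Customer."No." := 'TEST-' + FORMAT(NextCustomerNo);  // illustrative numbering
  Customer.Name := 'Test Customer ' + FORMAT(NextCustomerNo);
  Customer.INSERT(TRUE);
  NextCustomerNo := NextCustomerNo + 1;
END;
```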
A test fixture strategy should clarify how much data is included in the default fixture and how often it is restored. Deciding how often to restore the fixture is a balance between the performance and the reliability of the tests. In the NAV test team we have tried the following strategies: restoring the fixture for each test function, restoring it once per test codeunit, and restoring it only when a test fails.
The fixture can be reset from within each test function or, when a test runner is used, from its OnBeforeTestRun trigger. Resetting the fixture this frequently overcomes the problem of interdependent tests, but it is really only suitable for very small test suites or when the default fixture can be restored very quickly:
BackupMgt.DefaultFixture;
// test code
An alternative strategy is to recreate the fixture only once per test codeunit. The obvious advantage is reduced execution time, and the risk of interdependent tests is limited to the boundaries of a single test codeunit. As long as test codeunits do not contain too many test functions and each is owned by a single tester, this should not cause too many problems. This strategy may be implemented in the test runner, in the test codeunit's OnRun trigger, or using the Lazy Setup pattern.
With the Lazy Setup pattern, an initialization function trigger is called from each test in a test codeunit. Only the first time it is executed does it restore the default fixture; as a result, the fixture is restored only once per test codeunit. This pattern works even if not all tests in a test codeunit are executed or if they are executed in a different order. Lazy Setup may be implemented like this:
LOCAL PROCEDURE Initialize();
BEGIN
  IF Initialized THEN
    EXIT;

  BackupMgt.DefaultFixture;
  // additional fixture setup code

  Initialized := TRUE;
END;
PROCEDURE TestScenarioA();
BEGIN
  Initialize;
  // test code scenario A ...
END;

PROCEDURE TestScenarioB();
BEGIN
  Initialize;
  // test code scenario B ...
END;

As the number of test codeunits grows, the overhead of recreating the test fixture for each test codeunit may still become too large. For a large number of tests, resetting the fixture only once per codeunit works only when the tests are completely insensitive to any change in the test data, and for most tests this will not be the case.
The last strategy recreates the test fixture only when a test fails. To detect failures caused by (hidden) data sensitivities, the failing test is then rerun against the newly created fixture. In this way, certain false positives (i.e., test failures not caused by a product defect) can be avoided. To implement this strategy, the test results need to be recorded in a dedicated table from the OnAfterTestRun trigger of the test runner. When the execution of a test codeunit has finished, the results can be examined to determine whether a test failed and the codeunit should be run again.
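A sketch of the recording step in the test runner follows. The Test Result record and its fields are assumed to be a custom table defined for this purpose; the trigger signature matches the test runner triggers introduced with NAV 2009 SP1:

```
PROCEDURE OnAfterTestRun(CodeunitID : Integer; FunctionName : Text[1024]; Success : Boolean);
BEGIN
  // Record the outcome of each test function in a dedicated table
  // (TestResult is a variable of the assumed custom Test Result table).
  TestResult.INIT;
  TestResult."Codeunit ID" := CodeunitID;
  TestResult."Function Name" :=
    COPYSTR(FunctionName,1,MAXSTRLEN(TestResult."Function Name"));
  TestResult.Success := Success;
  TestResult.INSERT;
END;
```

After the codeunit finishes, a driver can scan this table for failed functions and rerun the codeunit against a freshly restored fixture.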
For the implementation of each of these fixture strategies it is important to consider any dependencies that are introduced between test codeunits and the test execution infrastructure (e.g., test runners). The consequence of implementing the setup of the default fixture in the test runner codeunit may be that it becomes more difficult to execute tests in other contexts. On the other hand, if test codeunits are completely self-contained it becomes really easy to import and execute them in other environments (e.g., at a customer's site).