Test tools can be used to support one or more testing activities. Such tools include:
· Tools that are directly used in testing, such as test execution tools and test data preparation tools
· Tools that help to manage requirements, test cases, test procedures, automated test scripts, test results, test data, and defects, and that support reporting and monitoring of test execution
· Tools that are used for investigation and evaluation
· Any tool that assists in testing (in this sense, a spreadsheet is also a test tool)
Test tools can have one or more of the following purposes depending on the context:
· Improve the efficiency of test activities by automating repetitive tasks or tasks that require significant resources when done manually (e.g., test execution, regression testing)
· Improve the efficiency of test activities by supporting manual test activities throughout the test process
· Improve the quality of test activities by allowing for more consistent testing and a higher level of defect reproducibility
· Automate activities that cannot be executed manually (e.g., large-scale performance testing)
· Increase the reliability of testing (e.g., by automating large data comparisons or simulating behaviour)
Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g., commercial or open source), and technology used. Tools are classified in this syllabus according to the test activities that they support.
Some tools clearly support only or mainly one activity; others may support more than one activity, but they are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be provided as an integrated suite.
Some types of test tools can be intrusive, which means that they may affect the actual outcome of the test. For example, the actual response times for an application may be different due to the extra instructions that are executed by a performance testing tool, or the amount of code coverage achieved may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the probe effect.
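To make the probe effect concrete, here is a minimal Python sketch (not from the syllabus) that installs a trace hook of the kind coverage and profiling tools rely on, and times the same function with and without it; the instrumented measurement differs from the uninstrumented one purely because of the tool's presence.

```python
# Minimal illustration of the probe effect: the trace hook below is a
# stand-in for a coverage/profiling tool's instrumentation.
import sys
import time

def work():
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def timed_run():
    start = time.perf_counter()
    work()
    return time.perf_counter() - start

baseline = timed_run()        # measured without any instrumentation

def tracer(frame, event, arg):
    return tracer             # returning itself keeps line-level tracing on

sys.settrace(tracer)          # install the "tool"
instrumented = timed_run()    # same code, now observed by the hook
sys.settrace(None)

print(f"without instrumentation: {baseline:.4f} s")
print(f"with instrumentation:    {instrumented:.4f} s  (probe effect)")
```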
Some tools offer support that is typically more appropriate for developers (e.g., tools that are used during component and integration testing). Such tools are marked with “(D)” in the sections below.
Tool support for management of testing and testware
Management tools may apply to any test activities over the entire software development lifecycle. Examples of tools that support management of testing and testware include:
· Test management tools and application lifecycle management (ALM) tools
· Requirements management tools (e.g., traceability to test objects)
· Defect management tools
· Configuration management tools
· Continuous integration tools (D)
Tool support for static testing
Static testing tools are associated with the activities and benefits described in Chapter 3. Examples of such tools include:
· Tools that support reviews
· Static analysis tools (D)
Tool support for test design and implementation
Test design tools aid in the creation of maintainable work products in test design and implementation, including test cases, test procedures, and test data. Examples of such tools include:
· Test design tools
· Model-based testing tools
· Test data preparation tools
· Acceptance test-driven development (ATDD) and behaviour-driven development (BDD) tools (see the sketch after this list)
· Test-driven development (TDD) tools (D)
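As a rough illustration of the structure that ATDD/BDD tools support, the following Python sketch spells out a Given/When/Then scenario as plain step functions; real BDD tools map natural-language scenario steps to code like this, and the shopping-cart domain and all names here are invented for the example.

```python
# Illustrative only: Given/When/Then steps written as plain functions.
class Cart:
    """Hypothetical stand-in for the test object."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def given_an_empty_cart():
    return Cart()

def when_an_item_is_added(cart, item):
    cart.add(item)

def then_the_cart_contains(cart, count):
    assert len(cart.items) == count, f"expected {count} item(s)"

# Scenario: adding an item to an empty cart
cart = given_an_empty_cart()
when_an_item_is_added(cart, "toaster")
then_the_cart_contains(cart, 1)
print("scenario passed")
```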
In some cases, tools that support test design and implementation may also support test execution and logging, or provide their outputs directly to other tools that support test execution and logging.
Tool support for test execution and logging
Many tools exist to support and enhance test execution and logging activities. Examples of these tools include:
· Test execution tools (e.g., to run regression tests)
· Coverage tools (e.g., requirements coverage, code coverage (D))
· Test harnesses (D)
· Unit test framework tools (D) (see the sketch after this list)
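As a small example of the last category, the sketch below uses Python's unittest (one of many unit test framework tools) to exercise an invented Calculator class; the framework supplies the harness: fixtures, assertions, execution, and logging of results.

```python
# Illustrative only: a unit test framework tool (Python's unittest) in use.
import unittest

class Calculator:
    """Hypothetical component under test."""
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    def setUp(self):
        # Fixture: a fresh test object before every test
        self.calc = Calculator()

    def test_add_positive_numbers(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(self.calc.add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()   # the framework runs the tests and logs the results
```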
Tool support for performance measurement and dynamic analysis
Performance measurement and dynamic analysis tools are essential in supporting performance and load testing activities, as these activities cannot effectively be done manually. Examples of these tools include:
· Performance testing tools
· Monitoring tools
· Dynamic analysis tools (D)
Tool support for specialized testing needs
In addition to tools that support the general test process, there are many other tools that support more specific testing issues. Examples of these include tools that focus on:
· Data quality assessment
· Data conversion and migration
· Usability testing
· Accessibility testing
· Localization testing
· Security testing
· Portability testing (e.g., testing software across multiple supported platforms)
Benefits and Risks of Test Automation
Simply acquiring a tool does not guarantee success. Each new tool introduced into an organization will require effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks. This is particularly true of test execution tools (the use of which is often referred to as test automation). Potential benefits of using tools to support test execution include:
· Reduction in repetitive manual work (e.g., running regression tests, environment set-up/tear-down tasks, re-entering the same test data, and checking against coding standards), thus saving time
· Greater consistency and repeatability (e.g., test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements)
· More objective assessment (e.g., static measures, coverage)
· Easier access to information about testing (e.g., statistics and graphs about test progress, defect rates, and performance)
Potential risks of using tools to support testing include:
· Expectations for the tool may be unrealistic (including functionality and ease of use)
· The time, cost, and effort for the initial introduction of a tool may be under-estimated (including training and external expertise)
· The time and effort needed to achieve significant and continuing benefits from the tool may be under-estimated (including the need for changes in the test process and continuous improvement in the way the tool is used)
· The effort required to maintain the test assets generated by the tool may be under-estimated
· The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better)
· Version control of test assets may be neglected
· Relationships and interoperability issues between critical tools may be neglected, such as requirements management tools, configuration management tools, defect management tools, and tools from multiple vendors
· The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
· The vendor may provide a poor response for support, upgrades, and defect fixes
· An open source project may be suspended
· A new platform or technology may not be supported by the tool
· There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)
Special Considerations for Test Execution and Test Management Tools
In order to have a smooth and successful implementation, there are a number of things that ought to be considered when selecting and integrating test execution and test management tools into an organization.
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve substantial benefits.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur. The latest generation of these tools, which takes advantage of “smart” image capturing technology, has increased the usefulness of this class of tools, although the generated scripts still require ongoing maintenance as the system’s user interface evolves over time.
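The fragility described above is easier to see in code. The following Python sketch mimics the shape of a captured script; the click/type helpers are hypothetical stubs standing in for a capture/replay tool's API, and every action and data value is baked into a strictly linear sequence.

```python
# Illustrative only: the linear shape of a recorded (captured) script.
def click(element):
    print(f"click {element}")            # stub for the replay tool's API

def type_text(element, text):
    print(f"type '{text}' into {element}")

click("login_link")
type_text("username_field", "alice")     # specific data baked in
type_text("password_field", "s3cret")
click("submit_button")
click("orders_tab")                      # strictly linear: an unexpected
click("first_order_row")                 # dialog at any step breaks replay
```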
A data-driven testing approach separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create new test data for these predefined scripts.
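A minimal sketch of this approach, assuming an invented login() function as the test object: the generic script below reads rows of inputs and expected results (here an in-memory CSV standing in for a spreadsheet) and runs the same steps once per row, so new cases need only new data.

```python
# Illustrative only: a data-driven test script reading CSV test data.
import csv
import io

def login(username, password):
    """Hypothetical stand-in for the system under test."""
    return username == "alice" and password == "s3cret"

# In practice this would be a spreadsheet/CSV file maintained by testers.
test_data = io.StringIO(
    "username,password,expected\n"
    "alice,s3cret,True\n"
    "alice,wrong,False\n"
)

for row in csv.DictReader(test_data):
    actual = login(row["username"], row["password"])
    expected = row["expected"] == "True"
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"login({row['username']!r}, {row['password']!r}): {verdict}")
```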
In a keyword-driven testing approach, a generic script processes keywords describing the actions to be taken (also called action words); the script then calls keyword scripts to process the associated test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords and associated data, which can be tailored to the application being tested. Further details and examples of the data-driven and keyword-driven testing approaches are given in the ISTQB-TAE Advanced Level Test Automation Engineer Syllabus.
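The following Python sketch shows the mechanics in miniature, with all keyword names and actions invented for illustration: a generic interpreter maps each keyword (action word) to a keyword script, so a tester authors tests as keyword-plus-data rows without writing script code.

```python
# Illustrative only: a tiny keyword-driven interpreter.
def open_application(name):
    print(f"opening {name}")             # keyword script (stub)

def enter_text(field, value):
    print(f"entering '{value}' into {field}")

def press(button):
    print(f"pressing {button}")

KEYWORDS = {
    "OpenApplication": open_application,
    "EnterText": enter_text,
    "Press": press,
}

# Tester-authored test: one action word plus its data per step.
test_steps = [
    ("OpenApplication", ["webshop"]),
    ("EnterText", ["search_box", "toaster"]),
    ("Press", ["search_button"]),
]

for keyword, data in test_steps:
    KEYWORDS[keyword](*data)             # generic script dispatches on keyword
```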
The above approaches require someone to have expertise in the scripting language (testers, developers, or specialists in test automation). Regardless of the scripting technique used, the expected results for each test need to be compared to actual results from the test, either dynamically (while the test is running) or stored for later (post-execution) comparison.
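Both comparison modes can be sketched in a few lines; the add() function, file name, and result structure below are invented for the example.

```python
# Illustrative only: dynamic vs. post-execution comparison of results.
import json

def add(a, b):
    """Hypothetical stand-in for the test object."""
    return a + b

# Dynamic comparison: check the actual result while the test runs.
assert add(2, 2) == 4

# Post-execution comparison: store actual results, compare them later.
with open("actual_results.json", "w") as f:
    json.dump({"add_2_2": add(2, 2)}, f)

expected = {"add_2_2": 4}
with open("actual_results.json") as f:
    actual = json.load(f)
print("PASS" if actual == expected else "FAIL")
```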
Model-based testing (MBT) tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications, which can then be saved in a test management tool and/or executed by a test execution tool (see the ISTQB-MBT Foundation Level Model-Based Testing Syllabus).
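A toy version of what an MBT tool does internally might look like the sketch below, where a simple state model (invented for the example) is interpreted to derive test case specifications that cover every transition once.

```python
# Illustrative only: deriving test steps from a simple state model.
MODEL = {  # state -> list of (action, resulting state)
    "logged_out": [("log in", "logged_in")],
    "logged_in":  [("view cart", "cart"), ("log out", "logged_out")],
    "cart":       [("check out", "logged_in")],
}

def derive_test_steps(model):
    """Enumerate every transition once (all-transitions coverage)."""
    steps = []
    for state, transitions in model.items():
        for action, next_state in transitions:
            steps.append(f"in '{state}', do '{action}', expect '{next_state}'")
    return steps

for step in derive_test_steps(MODEL):
    print(step)
```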
Test management tools often need to interface with other tools or spreadsheets for various reasons, including:
· To produce useful information in a format that fits the needs of the organization
· To maintain consistent traceability to requirements in a requirements management tool
· To link with test object version information in the configuration management tool
This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle Management), which includes a test management module (and possibly a defect management system), as well as other modules (e.g., project schedule and budget information) that are used by different groups within an organization.
Effective Use of Tools
The main considerations in selecting a tool for an organization include:
· Assessment of the maturity of the organization, its strengths, and weaknesses
· Identification of opportunities for an improved test process supported by tools
· Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology
· Consideration of the build and continuous integration tools already in use within the organization, in order to ensure tool compatibility and integration
· Evaluation of the tool against clear requirements and objective criteria
· Consideration of whether or not the tool is available for a free trial period (and for how long)
· Evaluation of the vendor (including training, support, and commercial aspects) or of support for non-commercial (e.g., open source) tools
· Identification of internal requirements for coaching and mentoring in the use of the tool
· Evaluation of training needs, considering the testing (and test automation) skills of those who will be working directly with the tool(s)
· Consideration of the pros and cons of various licensing models (e.g., commercial or open source)
· Estimation of a cost-benefit ratio based on a concrete business case (if required)
As a final step, a proof-of-concept evaluation should be done to establish whether the tool performs effectively with the software under test and within the current infrastructure or, if necessary, to identify changes needed to that infrastructure to use the tool effectively.
After completing the tool selection and a successful proof-of-concept, introducing the selected tool into an organization generally starts with a pilot project, which has the following objectives:
· Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses
· Evaluating how the tool fits with existing processes and practices, and determining what would need to change
· Deciding on standard ways of using, managing, storing, and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, selecting coding standards, creating libraries, and defining the modularity of test suites)
· Assessing whether the benefits will be achieved at reasonable cost
· Understanding the metrics that the tool is expected to collect and report, and configuring the tool to ensure these metrics can be captured and reported
Success factors for evaluation, implementation, deployment, and ongoing support of tools within an organization include:
· Rolling out the tool to the rest of the organization incrementally
· Adapting and improving processes to fit with the use of the tool
· Providing training, coaching, and mentoring for tool users
· Defining guidelines for the use of the tool (e.g., internal standards for automation)
· Implementing a way to gather usage information from the actual use of the tool
· Monitoring tool use and benefits
· Providing support to the users of a given tool
· Gathering lessons learned from all users
It is also important to ensure that the tool is technically and organizationally integrated into the software development lifecycle, which may involve separate organizations responsible for operations and/or third-party suppliers.