What makes a good test engineer?
What makes a good Software QA engineer?
What makes a good QA or Test manager?
What's the role of documentation in QA?
What's the big deal about 'requirements'?
What steps are needed to develop and run software tests?
What's a 'test plan'?
What's a 'test case'?
What should be done after a bug is found?
What is 'configuration management'?
What if the software is so buggy it can't really be tested at all?
How can it be known when to stop testing?
What if there isn't enough time for thorough testing?
What if the project isn't big enough to justify extensive testing?
What can be done if requirements are changing continuously?
What if the application has functionality that wasn't in the requirements?
How can Software QA processes be implemented without stifling productivity?
What if an organization is growing so fast that fixed QA
processes are impossible?
How does a client/server environment affect testing?
How can World Wide Web sites be tested?
How is testing affected by object-oriented designs?
What makes a good test engineer?
A good test engineer has a 'test to break' attitude,
an ability to take the point of view of the customer, a strong
desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers,
and an ability to communicate with both technical (developers) and
non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides
a deeper understanding of the software development process,
gives the tester an appreciation for the developers' point
of view, and reduces the learning curve in automated test
tool programming. Judgement skills are needed to assess high-risk
areas of an application on which to focus testing efforts
when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA
engineer. Additionally, they must be able to understand
the entire software development process and how it can fit
into the business approach and goals of the organization.
Communication skills and the ability to understand various sides
of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are
especially needed. An ability to find problems as well as
to see 'what's missing' is important for inspections
and reviews.
What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not
necessarily paper.) QA practices should be documented
such that they are repeatable. Specifications, designs,
business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals, etc.
should all be documented. Ideally there should be a system for
easily finding and obtaining documents and for determining
which documents contain a particular piece of information.
Change management for documentation should be used if
possible.
What's the big deal about 'requirements'?
One of the most reliable ways to ensure problems, or
outright failure, in a complex software project is to have
poorly documented requirements specifications. Requirements
are the details describing an application's
externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably
detailed, cohesive, attainable, and testable.
A non-testable requirement would be, for
example, 'user-friendly' (too subjective). A testable
requirement would be something like 'the user must
enter their previously-assigned password to access the
application'. Determining and organizing requirements details
in a useful and efficient way can be a difficult
effort; different methods are available
depending on the particular project. Many
books are available that describe various
approaches to this task. (See the
Bookstore section's
'Software Requirements Engineering' category
for books on Software Requirements.)
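To make the notion of a 'testable requirement' concrete, below is a
minimal sketch in Python of tests for the password requirement quoted
above. The authenticate() function is a made-up stand-in for real
application code, not part of any particular product:

    import unittest

    # Hypothetical code under test; name and behavior are assumptions
    # used only to illustrate a testable requirement.
    def authenticate(user_id, password, stored_passwords):
        """Grant access only if the previously-assigned password matches."""
        return stored_passwords.get(user_id) == password

    class TestPasswordRequirement(unittest.TestCase):
        """'The user must enter their previously-assigned password to
        access the application' is testable: each check below has a
        clear pass/fail outcome."""

        def setUp(self):
            self.stored = {"alice": "s3cret"}

        def test_correct_password_grants_access(self):
            self.assertTrue(authenticate("alice", "s3cret", self.stored))

        def test_wrong_password_denies_access(self):
            self.assertFalse(authenticate("alice", "guess", self.stored))

        def test_unknown_user_denied(self):
            self.assertFalse(authenticate("bob", "s3cret", self.stored))

    if __name__ == "__main__":
        unittest.main()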
Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or external personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
What's a 'test plan'?
A software project test plan is a document that describes
the objectives, scope, approach, and focus of a software
testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to
validate the acceptability of a software product. The
completed document will help people outside the test
group understand the 'why' and 'how' of product validation.
It should be thorough enough to be useful but not so
thorough that no one outside the test group will read it.
The following are some of the items that might be
included in a test plan, depending on the particular project:
What should be done after a bug is found?
The bug needs to be communicated and assigned to
developers who can fix it. After the problem is resolved,
the fix should be re-tested, and a determination made as to
whether regression testing is needed to check that the fix
didn't create problems elsewhere. If a problem-tracking system
is in place, it should encapsulate these processes. A variety
of commercial problem-tracking/management software tools
are available (see the 'Tools' section
for web resources with listings of such tools). The following
are items to consider in the tracking process:
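As a rough illustration of the kind of information a tracking system
records, here is a minimal Python sketch of a bug record and its life
cycle; the field names are assumptions, not any specific tool's
schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class BugReport:
        """One record in a hypothetical problem-tracking system."""
        bug_id: int
        summary: str
        steps_to_reproduce: str
        severity: str            # e.g. 'critical', 'major', 'minor'
        reported_by: str
        assigned_to: str = ""
        status: str = "open"     # open -> assigned -> fixed -> retested -> closed
        reported_on: date = field(default_factory=date.today)

    def close_after_retest(bug, retest_passed):
        """A fix is closed only after re-testing confirms it; otherwise
        the bug is reopened (regression checks not shown)."""
        bug.status = "closed" if retest_passed else "reopened"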
What is 'configuration management'?
Configuration management covers the processes used to control,
coordinate, and track: code, requirements, documentation,
problems, change requests, designs, tools/compilers/libraries/patches,
changes made to them, and who makes the changes. (See the
'Tools' section for web resources with
listings of configuration management tools. Also see
the Bookstore section's
'Configuration Management' category for
useful books with more information.)
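Real configuration management tools do far more, but as a toy
illustration of the 'track what changed' idea, this Python sketch
records a checksum for each tracked file and compares two recorded
configurations:

    import hashlib
    from pathlib import Path

    def build_manifest(root, patterns=("*.py", "*.md")):
        """Map each tracked file to a checksum of its contents."""
        manifest = {}
        for pattern in patterns:
            for path in sorted(Path(root).rglob(pattern)):
                if path.is_file():
                    manifest[str(path)] = hashlib.sha256(
                        path.read_bytes()).hexdigest()
        return manifest

    def diff_manifests(old, new):
        """Report files added, removed, or modified between two
        recorded configurations."""
        return {
            "added": sorted(set(new) - set(old)),
            "removed": sorted(set(old) - set(new)),
            "modified": sorted(p for p in set(old) & set(new)
                               if old[p] != new[p]),
        }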
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through
the process of reporting whatever bugs or blocking-type problems
initially show up, with the focus being on critical bugs. Since
this type of problem can severely affect schedules,
and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient
integration testing, poor design, improper build or release
procedures, etc.), managers should be notified and provided
with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software
applications are so complex, and run in such an interdependent
environment, that complete testing can never be done. Common
factors in deciding when to stop are:
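One commonly cited factor, for example, is that the rate of newly
found bugs has leveled off. The Python sketch below shows such a
stopping heuristic; the threshold and window values are invented
judgment calls, not fixed rules:

    def bug_find_rate_is_flat(bugs_per_cycle, threshold=1, window=3):
        """True if each of the last `window` test cycles found no more
        than `threshold` new bugs -- one fallible signal, never the
        only one, that testing may be reaching diminishing returns."""
        recent = bugs_per_cycle[-window:]
        return len(recent) == window and all(n <= threshold for n in recent)

    # Example: 14, 9, 5 new bugs in early cycles, then 1, 0, 1 recently.
    print(bug_find_rate_is_flat([14, 9, 5, 1, 0, 1]))  # True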
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an
application, every possible combination of events, every
dependency, or everything that could go wrong, risk analysis
is appropriate to most software development projects. This requires
judgement skills, common sense, and experience. (If warranted,
formal methods are also available.) Considerations can include:
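As a simple illustration of informal risk analysis, the sketch below
ranks application areas by likelihood times impact so that testing
effort goes to the riskiest areas first; the areas and ratings are
invented for the example:

    # Likelihood and impact rated 1 (low) to 5 (high).
    areas = [
        {"area": "payment processing", "likelihood": 4, "impact": 5},
        {"area": "login/session",      "likelihood": 2, "impact": 5},
        {"area": "report formatting",  "likelihood": 3, "impact": 2},
        {"area": "help screens",       "likelihood": 2, "impact": 1},
    ]

    for a in areas:
        a["risk"] = a["likelihood"] * a["impact"]

    # Highest-risk areas get tested first when time is short.
    for a in sorted(areas, key=lambda x: x["risk"], reverse=True):
        print(f"{a['area']:20} risk={a['risk']}")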
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of
the project. However, if extensive testing is still not justified,
risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?'
apply. The tester might then do ad hoc testing, or write
up a limited test plan based on the risk analysis.
What can be done if requirements are changing continuously?
A common problem and a major headache.
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine whether an application
has significant unexpected or hidden functionality; if it does, that
indicates deeper problems in the software development process.
If the functionality isn't necessary to the purpose of the
application, it should be removed, as it may have unknown impacts
or dependencies that were not taken into account by the designer
or the customer. If not removed, design information will be
needed to determine added testing needs or regression testing needs.
Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only
affects areas such as minor improvements in the user interface, for
example, it may not be a significant risk.
How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using
consensus to reach agreement on processes, and adjusting and
experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will
lessen the need for problem detection, panics and burn-out
will decrease, and there will be improved focus and less
wasted effort. At the same time, attempts should be made to
keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and
reporting, minimize time required in meetings, and promote
training as part of the QA process. However, no one - especially
talented technical types - likes rules or bureaucracy, and
in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development
will be needed, but less time will be required for late-night
bug-fixing and calming of irate customers.
(See the Bookstore section's
'Software QA', 'Software Engineering', and 'Project Management'
categories for useful books with more information.)
What if an organization is growing so fast that fixed QA
processes are impossible?
This is a common problem in the software industry, especially
in new technology areas. There is no easy solution in this
situation, other than:
How does a client/server environment affect testing?
Client/server applications can be quite complex due to
the multiple dependencies among clients, data communications,
hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the
focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining
client/server application limitations and capabilities.
There are commercial tools to assist with such testing.
(See the 'Tools' section for
web resources with listings that include these kinds of test
tools.)
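As a rough illustration of load testing (commercial tools are far
more capable), the following Python sketch fires concurrent requests
at an assumed placeholder URL and reports failures and response
times:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # assumed test server; replace as needed
    CLIENTS = 20                     # simulated concurrent clients
    TOTAL_REQUESTS = 200

    def timed_request(_):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            ok = True
        except OSError:
            ok = False
        return ok, time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
            results = list(pool.map(timed_request, range(TOTAL_REQUESTS)))
        times = [t for ok, t in results if ok]
        failures = sum(1 for ok, _ in results if not ok)
        print(f"requests: {len(results)}  failures: {failures}")
        if times:
            print(f"avg {sum(times)/len(times):.3f}s  max {max(times):.3f}s")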
How can World Wide Web sites be tested?
Web sites are essentially client/server applications -
with web servers and 'browser' clients.
Consideration should be given to the interactions between
HTML pages, TCP/IP communications, Internet connections,
firewalls, applications that run in web pages (such
as applets, JavaScript, plug-in applications), and
applications that run on the server side (such as CGI
scripts, database interfaces, logging applications,
dynamic page generators, etc.). Additionally, there are
a wide variety of servers and browsers, various
versions of each, small but sometimes significant
differences between them, variations in connection
speeds, rapidly changing technologies, and multiple
standards and protocols. The end result is that
testing for web sites can become a major ongoing effort.
Other considerations might include:
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
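One small piece of web-site testing that automates well is checking
for broken links. Below is a minimal Python sketch using only the
standard library; the starting URL is an assumed placeholder:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        """Gather href targets from anchor tags on one page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        """Fetch one page and report links that fail to load."""
        with urllib.request.urlopen(page_url, timeout=10) as resp:
            parser = LinkCollector()
            parser.feed(resp.read().decode("utf-8", errors="replace"))
        for link in parser.links:
            target = urljoin(page_url, link)
            if not target.startswith("http"):
                continue  # skip mailto:, javascript:, etc.
            try:
                urllib.request.urlopen(target, timeout=10).close()
            except OSError as exc:
                print(f"BROKEN: {target} ({exc})")

    if __name__ == "__main__":
        check_links("http://localhost:8000/")  # assumed placeholder URL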
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier
to trace from code to internal design to functional design
to requirements. While there will be little effect on black-box
testing (where an understanding of the internal design
of the application is unnecessary), white-box testing
can be oriented to the application's objects. If the
application was well-designed this can simplify test design.
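As a small illustration of object-oriented white-box testing, the
sketch below uses an invented Account class and exercises an internal
error branch that a purely black-box test might miss:

    import unittest

    class Account:
        """Invented example class; the tests are oriented to the
        object's methods and internal state."""
        def __init__(self, balance=0):
            self._balance = balance

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def withdraw(self, amount):
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount

        @property
        def balance(self):
            return self._balance

    class TestAccount(unittest.TestCase):
        def test_deposit_updates_internal_state(self):
            acct = Account()
            acct.deposit(50)
            self.assertEqual(acct.balance, 50)

        def test_withdraw_guard_branch(self):
            # Exercises the error branch directly.
            acct = Account(10)
            with self.assertRaises(ValueError):
                acct.withdraw(25)

    if __name__ == "__main__":
        unittest.main()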
About the Software QA and Testing Resource Center and its author
Send any comments/suggestions/ideas to: Rick Hower
© 1996-2000 by Rick Hower
Last revised: 1/2/2000