Testing

Jan 20, 2002

Richard Lowe

One of the hardest tasks for any IT department is that of testing.
This is, in my experience, the area where we have the greatest
failures (next to estimating the cost and length of a project) and
where we have enormous room for improvement.

I have been in the business for a long, long time, and the lack of
testing never fails to amaze me. There have been times when I've
received "finished" programs from developers which didn't even
run! Obviously the code had never been tested, at least not in any
meaningful way.

Before any testing can begin (and obviously this should also be
done before coding starts) you must have a thorough analysis and
design. You see, a program or system must be tested against the
specification and a set of standards. It cannot be done
arbitrarily or randomly.

Your specification explains what your systems are trying to
accomplish. The specification might say something like "a
standard URL will be accepted in the address field". Your
standards would state that all buffers must be checked for overrun
conditions, that URLs must be in a valid format, and so on. The standards apply
to ALL testing, while the specifications apply to the specific
program or system.
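
To make the distinction concrete, here is a minimal sketch of how the
two kinds of checks might look as automated tests. Python and its
unittest module are used purely for illustration (the article is not
tied to any language), and validate_address_field and
MAX_ADDRESS_LENGTH are hypothetical stand-ins for whatever your own
system and standards actually define.

    import unittest
    from urllib.parse import urlparse

    # Assumed limit taken from a hypothetical standards document.
    MAX_ADDRESS_LENGTH = 2048

    def validate_address_field(value):
        """Hypothetical routine standing in for the system under test."""
        if len(value) > MAX_ADDRESS_LENGTH:   # standards: no buffer overrun
            return False
        parsed = urlparse(value)              # standards: URL must be well formed
        return parsed.scheme in ("http", "https") and bool(parsed.netloc)

    class AddressFieldTests(unittest.TestCase):
        # Specification test: "a standard URL will be accepted in the
        # address field" applies to this program only.
        def test_standard_url_is_accepted(self):
            self.assertTrue(validate_address_field("http://www.example.com/page"))

        # Standards tests: these rules apply to every field in every program.
        def test_overlong_input_is_rejected(self):
            self.assertFalse(validate_address_field(
                "http://example.com/" + "a" * MAX_ADDRESS_LENGTH))

        def test_malformed_input_is_rejected(self):
            self.assertFalse(validate_address_field("not a url"))

    if __name__ == "__main__":
        unittest.main()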

A very critical fact (which seems to be completely unknown to
Microsoft) is that the marketing department is not in charge of
testing. To be done correctly, testing actually requires
top-notch people who have been specially trained and who are
highly motivated to do their jobs well.

You also cannot make a hundred thousand copies of a product and
send it out to tens of thousands of beta testers without a clear
set of goals, expert supervision and constant management and
expect to get anything meaningful back. Beta testing is vital to a
project, but it does not and cannot replace professional testing
staff. Another fact which seems to be invisible to Microsoft is that
the purpose of beta testing is to test, not to market a product.
Marketing is an essential part of a product plan, but it has
absolutely no place in the testing plan.

What are some of the common testing mistakes?

Testing to prove a program or system works - I know you want
your programs to work, but the purpose of testing is simply to
test, not to prove you are the best programmer on the planet.
Testing needs to hit a program hard, right between the eyes. Your
job as a tester is to ensure that the program meets the
specifications, and that any deficiencies are found and properly
recorded.

Trying to prove a program does not work - Again, the purpose of
testing is to test, not to prove anything. You should always have
a well defined testing plan and follow that plan.

Using testing to prototype a product - Prototyping is an extremely
useful part of the analysis and design phases of a project. The
purpose is to give your users and customers a way to see what
something will look and feel like before implementing a project.
Once design is done, prototypes should be thrown away and not used
again.

Using testing to set performance goals - Performance goals must be
understood before a project leaves the design phase. By the time a
project is implemented (much less tested) you should already know
how it will perform (barring bad programming, which is a different
problem that testing is designed to uncover). Testing will,
however, validate that the product does perform as indicated in
the specifications.

Testing without a test plan - I don't know how many programmers
I've seen who just wade right in and start testing. Come on,
people, how can you test something if you don't have a plan? What
are you trying to prove?

Testing without a specification - Remember, the purpose of
testing is to prove that a system or program meets the
specifications. That's all. It's very difficult to do that
without a specification right in front of you. Of course, this
assumes that you have a specification to begin with ...

Asking the developers to test their own programs - This is one of
the biggest mistakes (next to writing any code without a very good
specification) that you can make. How can a programmer test his or
her own code? First of all, programmers make lousy testers -
testing is a field all to itself and programmers are almost never
trained well in this area. Second, the developers of a system have
a conflict of interest - they want their software to work. Testers
need to approach the product with a more open mind.

Testing without a goal - If you don't have a goal in mind for your
testing, you don't know when you are done. What are you trying to
accomplish? Absolutely no bugs of any kind (not very practical)?
The best goal is 100% compliance with the specifications. This
does put the burden on the analysis and design team - but that is
exactly where the responsibility lies.
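
One way to make "100% compliance" measurable is a simple traceability
list that ties each specification item to at least one test, so you
know exactly when you are done. The sketch below uses Python and
entirely hypothetical item names.

    # Hypothetical traceability map: every specification item must be
    # covered by at least one named test before testing is complete.
    SPEC_TO_TESTS = {
        "SPEC-001: standard URL accepted in address field":
            ["test_standard_url_is_accepted"],
        "SPEC-002: order totals rounded to two decimals": [],  # not yet covered
    }

    uncovered = [item for item, tests in SPEC_TO_TESTS.items() if not tests]
    if uncovered:
        print("Testing goal not met; uncovered items:", uncovered)
    else:
        print("Testing goal met: every specification item has a test.")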

Expecting an unsupervised beta testing group to do anything
meaningful - Beta testers need well defined goals, constant
supervision and strong leadership to be successful. Without these
things beta testing is simply a numbers game which does nothing
useful at all.

Testing is not the appropriate time to make design decisions -
Something that I commonly see from Microsoft and other large
companies is that they create a product and send it out to their
beta testers for feedback. Guys, come on. Beta testing is not the
place for this. Design decisions need to be made well before a
product is sent out for testing. You want to find out if your
users will like a feature? Create a prototype, send it to a
statistically valid sample audience, clearly define it to that
audience as a prototype, and survey them for their opinions
afterwards. Don't send out a poorly defined "beta test" to a
hundred thousand people and try to get
their opinions on features. The only thing you are going to
accomplish is to get yourself slammed in the media. You also find
yourself making design changes in a product at the wrong stage of
the product life cycle. Design changes need to be made during the
analysis and design phases of the project, not AFTER
implementation.

So how is good testing done?

Analysis and design must be done first - No matter how large the
project, you will be much more likely to succeed if you do these
two steps thoroughly and completely before implementation and
testing. Many years ago I had a boss named Gary who didn't
understand this simple rule. He asked me to start implementing a
warehousing system for a client without writing a specification
(over my objections). His concept of design was to spend an hour
or two asking the customer what was needed, then to start coding,
then to show the customer, make changes, code some more, show the
customer, make changes and so on until the customer said "it's
fine". Needless to say, the project took far longer than necessary
and did not do a great job of meeting the customer's needs. It was
also very buggy and required an immense amount of support during
the first couple of years of its life cycle.

The only phase where the marketing department should be involved
is analysis - A well trained analyst understands that the
marketing department is a customer and must be included in the
analysis phase of the project. This is the only time (until the
product is through testing) that marketing should have any input.
If you don't follow this rule you will wind up with a product
which changes direction during testing, and thus invalidates your
test.

Understand that a specification is a contract - The goal is to
implement something that meets the specification. This is the only
way that I know of to produce a software project that (a) gets
finished at all and (b) meets the customer's expectations. Of
course, this assumes that your analysis and design is top notch.
What does this really mean? It simply means that changes to the
design are only allowed during the analysis and design phase.
Period. If your customer changes anything at all after the
analysis and design, you must reanalyze, redesign and
renegotiate - always and without fail.

Let's say you are the contractor who has been hired to create a
new warehouse system. You do your analysis and design and it is
approved by the customer. You now have a contract and it is
important that your customer understands this from the beginning.
Okay, you begin the project and your customer decides he wants to
add bar coding. This seems pretty simple so you say "sure". Wrong
thing to do. You should say either "let's finish the project as
designed then add things" or "okay, we will need to stop, see how
that affects the project (at the customer's cost), then we will
submit a cost and new delivery date".

Maintain standards - Testing measures the implementation against
the specifications and standards. Standards should be made known
to the customer as part of the entire package. These might include
rules such as: all fields will be validated in specific ways,
buffers will not overflow, screens will have a certain look, and so
on.
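
As one illustration, such standards can live as a single, reusable
set of checks that every program applies to every field, rather than
being re-invented test by test. The field names and limits below are
hypothetical Python examples, not drawn from any real project.

    # Hypothetical field-length standards shared by every program.
    FIELD_LIMITS = {"customer_name": 80, "address": 2048, "order_id": 16}

    def violates_length_standard(field, value):
        """Standards check: no field may exceed its declared maximum length."""
        return len(value) > FIELD_LIMITS[field]

    def run_standards_checks(record):
        """Return the names of any fields in the record that break the standard."""
        return [field for field, value in record.items()
                if violates_length_standard(field, value)]

    # Usage: the same checks run against every record, in every program.
    print(run_standards_checks({"customer_name": "ACME Corp", "order_id": "A" * 32}))
    # prints: ['order_id']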

Remember the purpose of testing - Testing should prove the
implementation meets everything included in the specifications and
standards. Testing does NOT mean the product is measured against
customer expectations (that's a marketing function which should
have been nailed down during the analysis and design phase). You
see, the specification MUST meet the customer's expectations before
implementation begins. Then the final product WILL meet customer
expectations, as the specification is the expectation.