A Reader Question: "How should we test the quality of a User Guide?"


Testing the quality of a user guide is a topic that is not addressed very often.

I have received the following inquiry from a regular reader of mine:

QUESTION:

“I would like your opinion on how you or your company ensures that only complete, high-quality documentation is delivered to customers.

I am asking this because of a scenario where peer reviews and SME reviews fail to contribute effectively to improving quality. One idea suggested for ensuring top-quality documentation is regular testing of user guides and online help by testers. This is a very good step, but for me the problem starts with what the testers ‘test’ in these documents.

I feel that if a tester can complete a task based on the procedure in the user manual, then that is the most effective testing that can be achieved. Most users want to finish a task based on the information in the user documentation without running into any issues. If that effort fails and the manual does not provide the information users need to solve those issues, then there is a problem with the information in the user manual.

I also do not understand why users would want to know what the software does in the background. For example, when I log in to my webmail, all I care about is how quickly I can access my inbox. I am not worried about what the webmail service does when I click the Login button, or how the program validates my credentials. So, do you think a tester should check whether the functional information covered in the user manual is adequate, and whether it covers all the possible error scenarios?

I feel that if SME reviews and peer reviews are done effectively, a major portion of the issues can be solved. A tester should only test a user manual for the ease with which tasks can be completed. And I don’t think errors or omissions in a user manual can be termed “bugs” in the conventional sense of the word.”
And here is my answer:

ANSWER:

Having professional testers go through a user manual before it is shipped to the end user is an excellent idea.

However, in terms of industry realities (especially in the software sector), that kind of solution is really a luxury that most technical writing departments cannot afford. Testers are usually professionals working hard under a great deal of pressure and against firm deadlines. “Testing a document” is usually not part of their job description.

Instead, the “testing” is usually done by one of the following three groups:

1) Peer review by other writers;

2) SMEs (Subject Matter Experts); and

3) The end users.

The first two options should be preferred to end-user testing, since the cost of errors discovered by users can be rather high. All errors in the documentation should be found (and corrected) before the document is released to the market.

If testers can be convinced to “test” a user manual, checking the procedural tasks should be enough. A user manual should largely consist of the most frequently used user functions. When it starts going into behind-the-scenes functionality and technical details, we usually call such a document a System Administrator’s Guide (or System Admin Guide, for short) rather than a User Guide.
When testers test the technical functionality that lies behind the interface, we call that either “unit testing” (i.e., testing whether an isolated code unit or component works as it is supposed to) or “system testing” / “system integration testing” (i.e., testing whether the code unit or component in question works in harmony with the other components of the overall system or product).
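To make that distinction concrete, here is a minimal sketch of what a unit test looks like, using Python’s built-in unittest module. The validate_credentials function is a hypothetical stand-in for the webmail login check mentioned in the question; a system or integration test would instead exercise the whole login flow against the real user store.

```python
import unittest


def validate_credentials(username: str, password: str) -> bool:
    """Hypothetical stand-in for a webmail login check (illustration only)."""
    # A real implementation would query a user database or auth service.
    known_users = {"alice": "s3cret"}
    return known_users.get(username) == password


class TestValidateCredentials(unittest.TestCase):
    """Unit test: exercises one isolated component, not the whole system."""

    def test_accepts_valid_credentials(self):
        self.assertTrue(validate_credentials("alice", "s3cret"))

    def test_rejects_wrong_password(self):
        self.assertFalse(validate_credentials("alice", "wrong"))


if __name__ == "__main__":
    unittest.main()
```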

If and when a tester puts a user guide through its paces, it would probably be called a special kind of “unit testing,” the “unit” here being the manual or guide itself.

If there are any further questions on this issue I’d be happy to answer them as well.