Important note: This page is being worked on. You should regularly check for updates.
How to write manual tests
This document contains requirements and guidelines for writing good manual test cases. Here you will find information about what should be tested and how to use our Testlink server. Rules for reviewing will be provided as well. When writing manual tests, do not forget the general guideline of the project: Work towards the goal!
Test cases
We use a Testlink server for writing, executing, and tracking manual test cases. Documentation for the Testlink project is available at http://testlink.sourceforge.net/docs/testLink.php.
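Execution results can also be recorded in Testlink programmatically through its XML-RPC API. The sketch below uses the Apache XML-RPC client library; the server URL, developer key, and IDs are placeholders, and the endpoint path and availability of tl.reportTCResult depend on the Testlink version and its configuration.
{{{
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class ReportResult {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - adjust host and path to your Testlink installation.
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://testlink.example.org/lib/api/xmlrpc.php"));
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);

        // Report one execution result; the devKey and IDs below are hypothetical.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("devKey", "your-api-key");
        params.put("testcaseid", 123);
        params.put("testplanid", 45);
        params.put("status", "p"); // "p" = passed, "f" = failed, "b" = blocked
        params.put("notes", "Executed manually; all steps passed.");

        Object result = client.execute("tl.reportTCResult", new Object[] { params });
        System.out.println(result);
    }
}
}}}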
Here are some basic rules for writing test cases:
- Each use case must be decomposed into simple (single-action) steps (see the example after this list).
- The use cases must cover all of Sophie2's functionality.
- Every use case must consist of at most 15 steps.
- When you write a new use case, make sure that it does not already exist.
- Use cases must be organized into categories.
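For illustration, here is a hypothetical test case decomposed into single-action steps, written in the XML format that Testlink can import. Element names may differ between Testlink versions, so treat this as a sketch rather than a canonical template:
{{{
<testcases>
  <testcase name="Add and delete a page">
    <summary>Verifies that a page can be added to and removed from a book.</summary>
    <preconditions>Sophie2 is running and a book is open (Step0).</preconditions>
    <steps>
      <step>
        <step_number>1</step_number>
        <actions>Choose the command that inserts a new page.</actions>
        <expectedresults>A new page appears after the current one.</expectedresults>
      </step>
      <step>
        <step_number>2</step_number>
        <actions>Choose the command that deletes the new page.</actions>
        <expectedresults>The page is removed and the page count returns to its initial value.</expectedresults>
      </step>
    </steps>
  </testcase>
</testcases>
}}}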
The following basic test plan should be executed in every iteration. It covers basic functionality that must always work:
- Open Sophie2
- Create a new book or open an existing one
- Add/delete pages
- Add/delete frames
- Save/close the book
- Exit Sophie2
As testing progresses, this plan will be expanded and more test plans will be added.
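If parts of this plan are later automated, its steps could map onto an auto test. The JUnit sketch below is purely illustrative: SophieApplication and Book are hypothetical stand-ins, not the real Sophie2 API.
{{{
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BasicSmokeTest {

    // NOTE: SophieApplication and Book are hypothetical names used for
    // illustration only; the real Sophie2 classes may differ substantially.
    @Test
    public void basicPlan() {
        SophieApplication app = SophieApplication.launch(); // Open Sophie2
        Book book = app.createBook("Untitled");             // Create a new book

        book.addPage();                                     // Add a page
        assertEquals(2, book.getPageCount());
        book.removePage(1);                                 // Delete it again
        assertEquals(1, book.getPageCount());

        book.save();                                        // Save the book
        book.close();                                       // Close the book
        app.exit();                                         // Exit Sophie2
    }
}
}}}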
Reporting bugs
Reviewing
This section contains rules for reviewing manual testing scenarios, as well as for performing and reviewing the testing phase of tasks.
Rules
The Testing section of a task's wiki page should contain the following (an example sketch follows the list):
- A link to the user documentation describing this task.
- A link to the release documentation if the result of this task will be part of the next release.
- Links to use cases in Testlink where applicable.
- Related test cases should be considered and listed as well.
- Links to all auto tests related to this task.
- see PLATFORM_STANDARDS_AUTO_TESTS for more information on automatic testing.
- (recommended) A link to the Hudson test report regarding these tests.
- Explanations of the results of the tests.
- A brief explanation of the bugs reported with links to their trac tickets.
- Links to related bugs should be provided as well.
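As a rough illustration, a Testing section that follows these rules might look like the sketch below (in Trac wiki markup); every link, name, and ticket number is a placeholder:
{{{
== Testing ==
 * User documentation: [wiki:USER_DOC_FEATURE_X]
 * Release documentation: [wiki:RELEASE_DOC_FEATURE_X] (part of the next release)
 * Manual test cases: Testlink suite "Feature X"; related cases: "Feature Y"
 * Auto tests: FeatureXTest (see PLATFORM_STANDARDS_AUTO_TESTS)
 * Hudson test report: http://hudson.example.org/job/sophie2/ (placeholder URL)
 * Results: all scenarios pass except step 4 of "Feature X" (see below)
 * Bugs: #1234 - brief description of the bug; related: #1235
}}}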
Scoring
The testing reviewer should make sure everything listed in the above section is in order: the documentation is well written, the manual testing scenarios cover all aspects of the task, bugs are adequately reported, and so on. Reviewers should either follow the standards in this document or comment on them in the Comments section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are the rules for scoring the testing phase:
- Score 1 (fail): The testing phase is not structured according to the standards, or only to a very small extent.
- Score 2 (fail): The testing phase is mostly structured according to the standards, but some items are missing, bugs are not linked or explained, or test cases are not suitable - in general, the testing does not cover all aspects of the functionality.
- Score 3 (pass): The testing phase is structured according to the standards and covers the functionality, but some descriptions are lacking and more could be added.
- Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
- Score 5 (pass): The testing phase is structured according to the standards and there is nothing more to add - even a person who is not familiar with the project can clearly see that the feature(s) are implemented well.
All reviews should be motivated: a detailed comment explaining why the testing phase fails is required. For a score of 3, a list of things that could be improved should be provided. Comments are encouraged for higher scores as well. Non-integer scores are strongly discouraged: if you give the testing a score of 3.5, you probably have not reviewed it thoroughly enough and cannot clearly state whether it is good or not. Once the testing phase has been reviewed, it cannot be altered. If you think the review is wrong, request a super review. Currently all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what the testing lacks before you request a super review.
Comments
Your comment here --developer.id@YYYY-MM-DD
- Test cases may contain prerequisites - of course, opening Sophie and creating a new book won't be a step in every test case. They should be marked as Step0.
- Talking about "obeying rules" in bug reports is a nice wish, but it does not make much sense, since we allow people to file bugs on their own. Unless someone goes through and moderates tickets, this is not useful. Requirements for bug reporting should be minimal, as bugs will be reported not only by people with technical knowledge. This also applies to bug categories, where we definitely need an "uncategorized" category.
- Bold and italics are not used by many developers; using them here makes this page a little inconsistent with the rest of the wiki content. We use headings and bullets.
- Sentences like "When writing manual tests, do not forget the general guideline of the project: Work towards the goal!" are out of place - this is a serious project, not a daycare.
- Sentences like "Non-integer scores are STRONGLY disencouraged." are out of place - this is a serious project, not a daycare. You do not have to SHOUT to be understood.
--deyan@2009-02-05
- Here is a good example of a bug report form: https://bugs.opera.com/wizard/. So simple. I propose making a similar form. --kyli@2009-02-16