Changes between Initial Version and Version 1 of PLATFORM_STANDARDS_MANUAL_TESTS


Timestamp: 01/26/09 22:19:45
Author: boyan

[[BackLinksMenu]]
[[PageOutline]]

'''Important note:''' This page is being worked on. You should regularly check for updates.

= How to write manual tests =
This document contains requirements and guidelines for writing good manual test cases. Here you will find information about what should be tested and how to use our [http://sophie2.org/testlink Testlink] server. Rules for reviewing are provided as well. When writing manual tests, do not forget the general guideline of the project: '''Work towards the goal!'''
     8 
== Test cases ==
We use a [http://sophie2.org/testlink Testlink] server for writing, executing and tracking manual test cases. The homepage of the Testlink project is http://testlink.sourceforge.net/docs/testLink.php.

Some basic rules for writing test cases:
 * Each use case must be decomposed into simple (single-action) steps.
 * The use cases must cover all of the Sophie2 functionality.
 * Every use case must consist of at most 15 steps.
 * Before writing a new use case, make sure that it does not already exist.
 * Use cases must be organized by category.

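As a rough illustration, the structural rules above (at most 15 steps, assigned to a category) could be checked mechanically. This is a hypothetical sketch; the dictionary shape with `steps` and `category` keys is an assumption for illustration, not a Testlink format.

```python
# Hypothetical sketch: check basic structural rules for a manual test case.
# The data shape (dict with "steps" and "category") is assumed, not Testlink's.

MAX_STEPS = 15

def validate_test_case(case):
    """Return a list of rule violations for a candidate test case."""
    problems = []
    steps = case.get("steps") or []
    if not steps:
        problems.append("test case has no steps")
    elif len(steps) > MAX_STEPS:
        problems.append(f"too many steps ({len(steps)} > {MAX_STEPS})")
    if not case.get("category"):
        problems.append("test case is not assigned to a category")
    return problems
```

A well-formed case such as `{"steps": ["Open Sophie2"], "category": "Books"}` yields an empty list; a 16-step case is flagged.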
The following basic test plan should be executed on every iteration. It consists of basic functionality that should always work:
 * Open Sophie2
 * Create/Open a new book
 * Add/delete pages
 * Add/delete frames
 * Save/Close the book
 * Exit Sophie2

As testing progresses, this plan will be expanded and more test plans will be added.
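The smoke-test plan above can be kept as plain data and the manual results summarized against it. A minimal sketch, assuming a simple step-to-boolean results mapping (the `report` helper is hypothetical, not part of any tool):

```python
# The basic smoke-test plan, kept as an ordered list of manual steps.
BASIC_PLAN = [
    "Open Sophie2",
    "Create/Open a new book",
    "Add/delete pages",
    "Add/delete frames",
    "Save/Close the book",
    "Exit Sophie2",
]

def report(results):
    """Summarize manual results; `results` maps step name -> True (passed)."""
    failed = [step for step in BASIC_PLAN if not results.get(step, False)]
    return "PASS" if not failed else "FAIL: " + "; ".join(failed)
```

Every step must pass for the plan to pass; any missing or failed step names appear in the failure summary.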
     28 
== Reporting bugs ==
We use our [http://sophie2.org/trac/ Trac] to report and track bugs. The homepage of the Trac project is http://trac.edgewall.org/.

To report a new bug, just add a new ticket. Fill in the ticket fields obeying the following rules:
 * The name of the ticket should be in capital letters and should start with BUG_, followed by a couple of words briefly describing the bug (e.g. BUG_PRO_LIB_OWN).
 * The description should be short but explanatory. It will be expanded in the Analysis section of the bug's wiki page. You should link the wiki page in the description.
 * You may leave the Assign to and Priority fields at their defaults.
 * Make sure you have selected ''bug'' in the Type field.
 * Select the current milestone and the version (2.0) from the drop-down menus.
 * Select the component this bug belongs to. If you cannot tell, select ''uncategorized''.
 * Estimate the importance of the bug and fill it in.
 * Also estimate the effort that it will take to fix the bug.
 * Finally, fill in your name in the analysis owners field.
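The naming convention for ticket summaries can be expressed as a simple pattern check. This is a sketch; the exact character set (capital letters and digits between underscores) is an assumption based on the example above.

```python
import re

# BUG_ followed by one or more capitalized, underscore-separated words,
# e.g. BUG_PRO_LIB_OWN. The allowed charset is an assumption.
TICKET_NAME = re.compile(r"^BUG(_[A-Z0-9]+)+$")

def is_valid_ticket_name(summary):
    """Return True if the ticket summary follows the BUG_ naming convention."""
    return bool(TICKET_NAME.match(summary))
```

For example, `BUG_PRO_LIB_OWN` is accepted while a lowercase name or a bare `BUG_` is rejected.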
     42 
Once you have created the bug's wiki page, fill in the Analysis section according to [wiki:PLATFORM_STANDARDS_ANALYSIS#BugFix].

Think carefully about which category the bug may belong to. Current suggestions are:
 * data loss
 * memory leak
 * low performance
 * regression (a previously working feature is now broken)
 * exception causing
 * unexpected behaviour

You may add a new suggestion here. After some bugs have been categorized, a custom field with this category will be added to Trac in order to ease bug tracking.
     54 
== Reviewing ==
This section contains rules for reviewing manual testing scenarios, as well as for doing and reviewing the testing phase of tasks.

=== Rules ===
The Testing section of a task's wiki page should contain:
 * A link to the user documentation describing this task.
 * A link to the release documentation if the result of this task will be part of the next release.
 * Links to use cases in [http://sophie2.org/testlink Testlink] where applicable.
  * Related test cases should be considered and listed as well.
 * Links to all auto tests related to this task.
  * See [wiki:PLATFORM_STANDARDS_AUTO_TESTS] for more information on automatic testing.
  * (recommended) A link to the Hudson test report regarding these tests.
 * Explanations of the results of the tests.
 * A brief explanation of the bugs reported, with links to their Trac tickets.
  * Links to related bugs should be provided as well.
     70 
=== Scoring ===
The testing reviewer should make sure everything listed in the above section is in order - the documentation is well written, the manual testing scenarios cover all aspects of the task, bugs are adequately reported, etc. Reviewers should either follow the standards in this document or comment on them in the ''Comments'' section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. The rules for scoring the testing phase are:
 * Score 1 (fail): The testing phase is not structured according to the standards (or only to a very small extent).
 * Score 2 (fail): The testing phase is mostly structured according to the standards, but some things are missing, bugs are not linked or explained, test cases are not suitable, etc. - in general, the testing does not cover all aspects of the functionality.
 * Score 3 (pass): The testing phase is structured according to the standards and covers the functionality, but lacks some descriptions and more things could be added.
 * Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
 * Score 5 (pass): The testing phase is structured according to the standards and there is nothing more to be added - it is so good that even a person who is not familiar with the project can clearly see that the feature(s) is/are implemented really well.

All reviews must be motivated. A detailed comment about why the testing phase fails is required. For a score of 3, a list of things that could be better should be provided. Comments are encouraged for higher scores as well. Non-integer scores are STRONGLY discouraged. If you give the testing a score of 3.5, you probably have not reviewed it thoroughly enough and cannot clearly state whether it is good or not. Once the testing phase has been reviewed, it cannot be altered. If you think a review is wrong, you should request a super review. Currently all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what the testing lacks before you request a super review.
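The scoring rules above can be summed up in a small sketch. The `check_review` helper is hypothetical; it simply encodes the constraints stated in this section (integer scores 1-5, pass at 3 or above, a motivating comment required for 3 or below):

```python
PASS_THRESHOLD = 3  # scores of 3 and above pass

def check_review(score, comment):
    """Validate a testing-phase review against the scoring rules above."""
    if score != int(score):
        raise ValueError("non-integer scores are strongly discouraged")
    score = int(score)
    if not 1 <= score <= 5:
        raise ValueError("scores must be in the range 1-5")
    if score <= PASS_THRESHOLD and not comment:
        # Failing scores need a detailed motivation; a 3 needs a list of
        # things that could be better.
        raise ValueError("a motivating comment is required for scores of 3 or below")
    return "pass" if score >= PASS_THRESHOLD else "fail"
```

So a score of 3.5 is rejected outright, a 2 without a comment is rejected, and a commented 2 is recorded as a fail.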
     80 
= Comments =
 ^Your comment here --developer.id@YYYY-MM-DD