Tuesday, September 15, 2015

Testing Strategy Template

This is a list of ideas to consider when reviewing the software development process on your project. It gives you a baseline to compare against what you already have, and a simplified view of what you may improve.

Testing Strategy.
In my opinion, lists of ideas like this are useful for sitting down and thinking through whether everything on your project works as well as you would like, and for quickly spotting ways to improve each phase of the development process.

Testing Strategy Template



Purpose


Testing is a continuous and integrated process in which all parties in the project are involved. The purpose of this Test Strategy is to create a shared understanding of the overall targets, approach, tools and timing of (not only) test activities. The objective is to achieve higher quality and shorter customer request lead times with minimum overhead, through frequent deliveries, close teamwork within the team and with the customer, continuous integration, short feedback loops and the ability to change the design frequently. The test strategy guides us through common obstacles with a clear view of how to evaluate the system, and lets us look at the development process as a whole.

 

Task's Definition of “Done”


The whole project team agrees on the conditions defining a task as “Done”, meaning it is ready to be released to production:

  1. The task/issue is formalized and created in the issue tracking system (Jira).
  2. The complete code of the solution is committed to the Version Control System, to a branch other than the master branch (e.g. an integration or development branch), according to the “Commit policy”.
  3. 100% of unit tests pass (a minimal example sketch follows at the end of this section).
  4. The code is reviewed (static testing) by a developer other than the author, and its quality is considered acceptable.
  5. The solution is functionally tested against both explicit and implicit requirements, and its quality is considered acceptable.
  6. The Product Owner (PO), on behalf of the task requester, has no objections to the solution.

Failing to meet any of the above criteria stops the task from moving forward through the workflow, and the task is returned to its previous status.
The corresponding flow is set up in Jira to reflect the items' transitions through the pipeline.
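As an illustration of item 3, a unit test verifies one software element in isolation and either passes or fails deterministically, which is what makes “100% of unit tests pass” a binary gate a CI server can enforce. Below is a minimal sketch in Python with pytest; the calculate_discount function and its rules are invented for illustration and are not part of this template.

    # test_discount.py -- a minimal, isolated unit test (hypothetical example)
    import pytest

    def calculate_discount(order_total):
        """Hypothetical code under test: 10% discount for orders of 100 or more."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total >= 100 else order_total

    def test_no_discount_below_threshold():
        assert calculate_discount(99) == 99

    def test_discount_applied_at_threshold():
        assert calculate_discount(100) == 90

    def test_negative_total_rejected():
        with pytest.raises(ValueError):
            calculate_discount(-1)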

Release Readiness

After system stabilization, a Release Candidate Build is assembled from the main code branch, into which the code of the changes (a functionality increment) has been merged.
Every Release Candidate Build is subjected to regression testing at all levels applicable to the product: UI, integration, API, components, 3rd-party integrations, etc.
If regression testing finds a bug of high severity, a new Release Candidate Build is assembled after the bug is fixed, and regression testing is conducted again.
Before the end of every sprint there is a “code freeze”, when no new major changes are merged to the main code branch, so as not to rapidly increase the regression testing scope.
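A minimal sketch of the release gate described above, assuming the open bugs found by regression testing are available with plain-text severities (the severity names and the data source are assumptions, not part of the template):

    # release_gate.py -- hypothetical "no high-severity bugs" gate for a Release Candidate Build
    HIGH_SEVERITIES = {"blocker", "critical", "high"}  # assumed severity scale

    def release_candidate_ready(open_bug_severities):
        """Return True only if regression testing found no high-severity bugs."""
        return not any(s.lower() in HIGH_SEVERITIES for s in open_bug_severities)

    if __name__ == "__main__":
        # One critical bug found during regression -> fix it and assemble a new RC build.
        print(release_candidate_ready(["minor", "critical"]))  # False
        print(release_candidate_ready(["minor", "trivial"]))   # True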



Regression Testing Strategy


Regression Testing aims to cover functionality that was not directly changed but is at risk due to other changes. The risks are identified and prioritized by testers and by anyone aware of the functionality dependencies.

There may be a separate Regression Test Set for every significant change at any testing level.
The prepared Regression Test Set (automated as far as possible) is run during this period to ensure the code integration has not broken existing functionality.
Before every major functionality release, there may be a sprint dedicated to more thorough testing of the riskiest areas.
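One common way to keep a Regression Test Set separate and automated is to tag the relevant checks and select them at run time. A sketch using pytest markers; the marker names and the tests themselves are assumptions about how a team might organize this:

    # test_checkout.py -- hypothetical regression-tagged checks
    import pytest

    @pytest.mark.regression
    def test_existing_checkout_flow_still_works():
        # Functionality not directly changed, but at risk due to dependencies.
        cart = [10, 20, 30]
        assert sum(cart) == 60

    @pytest.mark.regression
    def test_order_summary_total_is_rounded():
        assert round(0.1 + 0.2, 2) == 0.3

    # Run only the regression set, e.g. before assembling a Release Candidate Build:
    #   pytest -m regression
    # (custom markers should be registered in pytest.ini to avoid warnings)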


Requirements Strategy


  1. Every piece of requirements to be implemented is put into Jira in a “User Story” format, connecting it to real use cases and underlining the value for the end user who benefits from the story's implementation (see the example after this list).
  2. Testing starts with exploration of the requirements by elaborating on the task from different perspectives. Exploration is carried out by the whole team, including the PO, until the team agrees on the user story scope and estimation.
  3. The team always implements highest priority work items first. Each new work item is prioritized by PO and added to the backlog in Jira.
  4. PO may re-prioritize or remove work items from product backlog at any time.
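A hypothetical example of the “User Story” format mentioned in item 1 (the story and its acceptance criterion are invented for illustration):

    As a registered customer,
    I want to save my delivery address in my profile,
    so that I do not have to re-enter it on every order.

    Acceptance criterion: the saved address is pre-filled at checkout and can be edited or removed.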

Quality and Test Objectives


Quality Feature: Functionality
Description: Features and functions conform to requirements (both explicit and implicit).
Measure and Target: No high-severity bugs at the System Testing level.

Quality Feature: Reliability
Description: The service is available 24/7 for usage and administration. The system can recover from a component fault within a time not noticeable to the user.
Measure and Target: The system is available 99.99% of the time, as measured through system logs.

Quality Feature: Usability & Learnability
Description: Users can use the service intuitively, learning how to use new functions with ease.
Measure and Target: The customer support service is not massively contacted by users after a new feature release.

Quality Feature: Efficiency & Performance
Description: Responsiveness of the system under a given load and the ability to scale to meet growing demand.
Measure and Target: Load testing is regularly conducted at the System Testing level, and discovered issues are prioritized and addressed in the backlog.

Quality Feature: Maintainability
Description: Ease of adding features, correcting defects or releasing changes to the system; an adopted VCS commit policy; transparent release procedures.
Measure and Target:
  • Code duplication < 5%
  • Code complexity < 8
  • Unit test coverage > 80%
  • Method length < 20 lines
  • All logs of all environments are accessible to the dev team.

Quality Feature: Portability
Description: The service is available in all browsers/systems requested by the business.
Measure and Target: Browser/System: Firefox, Chrome, Safari/iOS, IE11/Win8, IE10/Win7.
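Thresholds like the ones above are most useful when they are checked automatically. As one illustration, here is a small, self-contained Python sketch that flags functions longer than 20 lines using the standard ast module; the threshold comes from the table, while the script itself is only an assumption about how a team might wire such a check into its pipeline:

    # check_method_length.py -- hypothetical checker for the "Method length < 20 lines" target
    import ast
    import sys

    MAX_LINES = 20

    def long_functions(source, filename="<string>"):
        """Yield (name, length) for every function longer than MAX_LINES."""
        tree = ast.parse(source, filename)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_LINES:
                    yield node.name, length

    if __name__ == "__main__":
        path = sys.argv[1]
        with open(path, encoding="utf-8") as f:
            violations = list(long_functions(f.read(), path))
        for name, length in violations:
            print(f"{path}: function '{name}' is {length} lines (limit {MAX_LINES})")
        sys.exit(1 if violations else 0)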

Testing Scope


Testers are free to propose any testing type applicable to the task, so formally no testing types are out of scope.
Testing type: Unit testing
Definition: Testing that verifies the implementation of software elements in isolation.
Environment: Development, Integration
Automated/Manual: Automated

Testing type: Code review and code analysis (static and dynamic)
Definition: Walkthrough and code analysis.
Environment: Development
Automated/Manual: Automated, Manual

Testing type: Functional and Feature testing
Definition: Testing an integrated hardware and software system to verify that the system meets the required functionality: 100% requirements coverage, 100% coverage of the main flows, 100% of the highest risks covered, operational scenarios tested, operational manuals tested, all failures reported.
Environment: Integration, Staging
Automated/Manual: Manual

Testing type: Smoke testing
Definition: A functional checklist run on every available code set (build) to verify whether the code and environment are consistent enough to continue with other types of testing.
Environment: Development, Integration
Automated/Manual: Automated

Testing type: System testing
Definition: Testing the whole system with end-to-end flows.
Environment: Integration, Staging
Automated/Manual: Automated, Manual

Testing type: Regression testing
Definition: Testing all the prior features and re-testing previously closed bugs.
Environment: Development, Integration, Staging
Automated/Manual: Automated, Manual

Testing type: Security testing
Definition: Verifying secure access, secure transmission and password/session security.
Environment: Staging, Production
Automated/Manual: Automated

Testing type: Cross-browser testing
Definition: Testing on each supported platform/browser.
Environment: Integration, Staging
Automated/Manual: Automated, Manual

Testing type: Performance and Availability testing
Definition: Production environment availability and responsiveness; load and stress testing to determine the system capacity.
Environment: Staging, Production
Automated/Manual: Automated

Testing type: Acceptance testing
Definition: Testing based on acceptance criteria to enable the customer to determine whether or not to accept the system.
Environment: Staging, Production
Automated/Manual: Automated, Manual
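As an illustration of the smoke testing row above, a build-level smoke check is typically a handful of fast automated probes answering one question: is this build and environment consistent enough to continue testing? A minimal sketch using only the standard library; the health-check URL and the expected status are assumptions:

    # smoke_check.py -- hypothetical smoke check run against every new build
    import sys
    import urllib.request

    HEALTH_URL = "https://integration.example.com/health"  # assumed endpoint

    def service_is_up(url, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        ok = service_is_up(HEALTH_URL)
        print("smoke check passed" if ok else "smoke check FAILED")
        sys.exit(0 if ok else 1)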



Test Environments

Name: Development
Description (browser side): Virtual machines with all required versions of browser/OS are available to all team members.
Description (server side): This environment is configured and used by a developer/tester, but may easily be shared. Base versions of the environment are created as often as necessary. Each team member may check out the latest base version and customize it. The code is based on the version/branch being developed.
Data setup: Anonymized* and cut-down production data.

Name: Integration
Description: This environment supports Continuous Integration of code changes and the execution of automated tests and code analysis.
Data setup: Anonymized* and cut-down production data. Automated checks generate their own test data and clear it on finish.

Name: Staging
Description: This environment supports exploratory and other manual testing.
Data setup: Partially anonymized*, but full, production data.

Name: Production
Description: Live environment.

* Production data anonymization takes a daily production DB dump and replaces users' and partners' sensitive data with test values, while keeping the data structure intact.
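A minimal sketch of the anonymization step described in the footnote, assuming user records arrive as dictionaries; the field names and replacement values are assumptions about what counts as sensitive data:

    # anonymize.py -- hypothetical masking of sensitive fields in a production dump
    def anonymize_user(row, index):
        """Replace sensitive values with test data while keeping the row structure intact."""
        masked = dict(row)  # every column is preserved, only the values change
        masked["email"] = f"user{index}@test.example.com"
        masked["full_name"] = f"Test User {index}"
        masked["phone"] = "+380000000000"
        return masked

    if __name__ == "__main__":
        production_rows = [
            {"id": 7, "email": "real@person.com", "full_name": "Real Person", "phone": "+380671234567"},
        ]
        print([anonymize_user(r, i) for i, r in enumerate(production_rows, start=1)])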

Test Design Strategy

Test design means prioritizing the infinite list of all possible inputs by the risk of not knowing the actual output. It is the process of identifying and formalizing Test Cases and Test Suites (in the form of checklists, mind maps, etc.) with regard to risk weights, i.e. the value customers might lose if a test is not conducted, or, in other words, the risk of remaining unaware of unwanted system behavior.
Each Test Case is explicitly related to a piece of Requirements, as well as to the Test Suites it is run under.
All Test Suites have their own goals and are intended for particular testing phases:
Testing type: Feature Testing
Test Suite Name: New Functionality Test Suite
Test Suite Intention: New features' functionality created or updated in the current sprint.

Testing type: Integration Testing
Test Suite Name: Sprint Regression Test Suite
Test Suite Intention: Dependent functionality that has not been directly changed.

Testing type: System Testing
Test Suite Name: System Regression Test Suite, Build Smoke Test Suite
Test Suite Intention: The system functionality as a whole.


The following techniques are used in every occurrence of test design:

  • Specification based / Black box techniques: Equivalence classes, Boundary value analysis, Decision tables, State Transitions and Use case testing (see the short illustration after this list)
  • Experience based techniques: Session Based Exploratory testing  

  • Structure based / white box techniques (mostly related to Unit Tests, but may refer to other testing types): Statement coverage, Decision coverage, Condition coverage and Multi condition coverage
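A short illustration of the boundary value analysis and equivalence class techniques from the first bullet, using pytest parametrization; the password length rule is a hypothetical requirement:

    # test_password_length.py -- hypothetical boundary value / equivalence class example
    import pytest

    MIN_LEN, MAX_LEN = 8, 64  # assumed requirement: password length must be 8..64

    def is_valid_length(password):
        return MIN_LEN <= len(password) <= MAX_LEN

    @pytest.mark.parametrize("length, expected", [
        (MIN_LEN - 1, False),  # just below the lower boundary
        (MIN_LEN, True),       # lower boundary
        (30, True),            # a value from the valid equivalence class
        (MAX_LEN, True),       # upper boundary
        (MAX_LEN + 1, False),  # just above the upper boundary
    ])
    def test_password_length_boundaries(length, expected):
        assert is_valid_length("a" * length) is expected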


Test Execution Strategy

We will keep in mind the following points:
  1. Testers cannot rely on having a complete specification.
  2. Testers should be embedded in the agile team to contribute in any way they can.
  3. Be prepared to work closely with developers.
  4. Focus on value added activities.
  5. Focus on what to test, not how to test.
  6. Focus on sufficient and straightforward situations.
  7. Focus on exploratory testing.
  8. ….

Test Automation Strategy

The automated checks are divided by purpose, and a separate check-running process is set up for each:
  1. Production checks, maintaining live data consistency and service availability.
    • Run on a schedule: hourly, daily, weekly.
  2. Development checks, incorporating all Unit, Smoke, Functional, Acceptance and other automated checks to run on any environment after a code/executable change.
    • A Continuous Integration (CI) server is set up to regularly check the VCS code branches for changes and run the respective Development checks.
    • To separate data-changing tests and keep them off the Production environment, all tests are marked as “Non-destructive” (suitable for Production) or “Destructive” (run only on the Development, Integration and Staging environments; dangerous for production data); see the sketch after this list.
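A sketch of the destructive / non-destructive separation, again with pytest markers; the marker names follow the wording above, while the way they are registered and selected is an assumption:

    # test_accounts.py -- hypothetical split between production-safe and data-changing checks
    import pytest

    @pytest.mark.nondestructive
    def test_login_page_is_reachable():
        # Read-only check: safe to run against Production.
        assert True  # placeholder for a real HTTP probe

    @pytest.mark.destructive
    def test_account_can_be_created_and_deleted():
        # Changes data: must never run against Production.
        assert True  # placeholder for a real create-and-delete scenario

    # The CI server selects the right subset per environment, e.g.:
    #   Production:                      pytest -m nondestructive
    #   Development/Integration/Staging: pytest -m "destructive or nondestructive"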
We select a task for automation based on the following factors:
  • What are the risks of the task not being automated?
  • How long does it take to run the tests manually?
  • How easy are the test cases to automate?
  • How many times is the test expected to run during the project?

Test Management

The Test Plan, test scenarios and test cases should be stored and handled in the same system.
Bugs, Tasks, Improvements and Stories may be stored in another system, but the items in both systems should be linked.

Release Procedures Strategy

Every version control commit must be linked to the single work item the change is dedicated to. No changes are allowed into a release that are not committed to the VCS, and no commits are allowed into a release that are not linked to a work item in the correct workflow status.
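A minimal sketch of how the “every commit is linked to a work item” rule can be enforced locally, for example as a Git commit-msg hook written in Python; the PROJ issue key and the hook wiring are assumptions:

    #!/usr/bin/env python3
    # commit-msg hook (hypothetical): reject commits whose message has no issue key
    import re
    import sys

    ISSUE_KEY = re.compile(r"\bPROJ-\d+\b")  # assumed Jira project key

    def main(message_file):
        with open(message_file, encoding="utf-8") as f:
            message = f.read()
        if not ISSUE_KEY.search(message):
            print("Commit rejected: the message must reference a work item, e.g. PROJ-123.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))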
Transparent release procedures: although performing the code roll-out during a release may be beyond the development team's responsibilities, the release procedure and its steps should be described well enough to be accurately reproduced on Staging or another environment, since many release issues arise precisely from unknown deployment steps.

Risks and Assumptions

Risks and assumptions raised at the daily stand-up meeting (in front of all team members) should be handled and addressed by the Scrum Master.

Defect Management Strategy

Ideally, defects are only raised and recorded when they are not going to be fixed immediately.
Each bug report must contain the conditions under which the defect occurred, so that it can be easily reproduced, fixed and the fix re-tested. The defect severity must be stated by the tester, or, if in doubt, by the PO, so that the defect is correctly prioritized and addressed.


I have written this document template based on my professional experience and on a template from Inder P Singh's blog. As he asks, in case of professional usage or sharing, please keep a reference to his blog.
