Workflow

Prioritize Backlog (Pre-Release Planning)

  • Full team meeting to review the pipeline backlog, review ticket specs, and assign tickets to the next release.

  • Immediate priority signifies items that should be included in the next sprint to ensure they are completed for the release.

  • High priority signifies items that can be included in the next sprint, but should definitely be delivered in the release.

  • Medium priority signifies items that can be included in the release after all Immediate and High priority items have been included, but may not be delivered in the release.

  • Low priority signifies items that can be included in the release after all Immediate, High, and Medium priority items have been included, but will likely not be delivered in the release.

Sprint Kickoff

  • Each sprint starts with a full team meeting to review and assign tickets to the sprint.

  • Discuss

    • Overall goals for the sprint.

    • Roles and responsibilities.

    • Schedule.

  • Review each ticket:

    • Priority

    • Scope

    • Requirements

      • Complete

      • Unambiguous

      • Not conflicting with other requirements

      • Scoped and sized correctly

      • Testable

    • UAT: Define a basic definition of done for each ticket in the sprint.

      • Create basic user scenarios, test cases… that satisfy the basic criteria of done (this does not mean bug free, but done in the sense that Development has satisfied the requirements).
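
As an illustration, a basic definition-of-done scenario might be captured directly in Gherkin form. The feature and acceptance criteria below are hypothetical examples, not actual project requirements:

  Scenario: Recipient search returns matching recipients
    Given I am logged in as a standard user
    When I search recipients for "Smith"
    Then every recipient in the results should have a name containing "Smith"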

Sprint Planning

  • Development and Automation review the technical aspects of the sprint. Details of the review depend on complexity, but can include:

    • Basic Scenario Workflows

    • Scenario users

    • Scenario data

    • Page URLs/API endpoints

    • Page element IDs (locators)

    • API methods

    • Security impact

    • Configuration changes

  • Automation plans sprint testing.

    • Feature files (see the sketch after this list).

    • Decide test type (manual/automated, smoke/sprint/regression…)

    • Schedule, order of testing

  • Development and Automation add estimates to ticket.

  • QA plans UAT.

  • BA responds to questions.

  • Development and Automation re-estimate based on answers and ask more questions as necessary.
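
As a sketch of what this planning can produce, a feature file often starts as a tagged skeleton, with the test type decided per scenario. The feature, scenarios, and tags below are hypothetical:

  @JMS @Recipients @RecipientDetails @OTF11533
  Feature: Recipient Details toolbar

  @Smoke
  Scenario: Toolbar is visible on Recipient Details
    When I open the Recipient Details page
    Then the toolbar should be visible

  @Manual @Regression
  Scenario: Toolbar renders correctly in all supported browsers
    # Steps to be defined with the developer during the sprint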

Assigned to Developer

  • Development will pull a ticket from the Sprint to work on.

Code (In Progress)

  • Code Ticket (Development)

    • Create unit tests.

    • Code change.

    • Test changes.

    • Commit code.

    • Automated build and test of committed code.

  • Code Ticket Tests (Automation)

    • If necessary, create user and test data load scripts to set up new users and test data.

    • Write or update the suite and feature config files and the Page Object class (see the sketch after this list).

    • Write the steps.

    • Commit code.

    • Automated build and test of committed code.
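
A minimal sketch of a Page Object class, assuming Selenium WebDriver. The page name, URL, and element ID are hypothetical placeholders for the values gathered during sprint planning:

  using OpenQA.Selenium;

  // Hypothetical page object for the Recipient Details page.
  public class RecipientDetailsPage
  {
      private readonly IWebDriver _driver;

      public RecipientDetailsPage(IWebDriver driver)
      {
          _driver = driver;
      }

      // Element IDs (locators) come from the sprint planning review.
      public IWebElement Toolbar => _driver.FindElement(By.Id("recipient-details-toolbar"));

      // The URL is a placeholder; the real page URL is recorded on the ticket.
      public void Open() => _driver.Navigate().GoToUrl("https://jms.example.com/Recipients/Details");
  }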

Code Complete

  • Development will request a code review from the Development team.

    • Work in progress (WIP) limit - Development cannot have more than 2 open tickets (tickets in the In Progress or Code Complete workflow) at one time.

    • If the code review doesn’t happen before the end of the day, Development will report it as a blocker in the next scrum.

  • While waiting for the review, Development will look for other tickets to code review before starting the next ticket.

Code Review Complete

  • Once the code review is complete, the reviewer will assign the ticket to Dev Automation.

Deployed to Development

  • Automated nightly build, deployment, smoke test, sprint test of development environment.

  • Automated weekly regression test of development environment.

  • Automation will change the workflow to Deployed to Development and assign the ticket back to the Developer after the nightly build picks up the changes and deploys them to the development environment.

  • If necessary, Automation will arrange a walkthrough with the developer.

Development Tested in Development

  • Development will add their test steps to the ticket.

Sprint Tested

  • Review the ticket for completeness:

    • Code Reviewed

    • Install Instructions – in the main ticket, not sub-tickets

    • Resolution (Defect Tickets)

    • Work Log

  • Attach test review (see Appendix A)

  • Add Update to ticket:

    • Tested in {Dev, Dev2, D1}: Pass or Rejected

  • Attach test assets: screenshots, data files...

  • If the ticket fails sprint testing, change the workflow to Rejected and assign to Developer. The ticket would then follow the standard workflow from Assigned to Developer.

Sprint Review

  • Walk through sprint tickets to prepare for the sprint demo.

Sprint Demo

  • Demonstrate to the product owner and stakeholders that sprint tickets meet acceptance criteria.

Sprint Retrospective

  • Discuss what could be improved in the delivery process.

Regression Sprint

  • PR deployment and smoke test.

  • Change all tickets that are ready for regression testing to Deployed to QA.

Deployed to QA

  • If the ticket fails QA testing, change the workflow to Rejected and assign it to the Automation Team. The Automation team will then review the reject:

    • Change the workflow to Assigned to Developer and assign it to the relevant developer if the ticket needs the attention of a developer.

    • Assign to BA if the ticket needs requirements review.

    • Assign back to QA with workflow of Deployed to QA if the reject is found to be invalid.

QA Tested

Release

Test Development Workflow

This gives an overview of how test development fits in the overall SDLC.

  • It all starts with a ticket

  • We have 3 types of tickets that may require test automation

    • Feature – new functionality

    • Defect – broken functionality in an existing feature

    • Incident – broken functionality in an existing feature that requires immediate attention

  • Feature

    1. Product Management defines the new functionality.

    2. Automation Engineer writes a spec that proves the functionality works and attaches it to the ticket.

    3. Developer develops the functionality.

    4. Automation Engineer/Developer writes test steps to implement the test spec.

    5. The test is run, and any failures require a loop back to #3.

  • Defect/Incident

    1. QA writes a test that proves the functionality doesn’t work.

    2. Automation Engineer writes a spec that proves the functionality is fixed.

    3. Go to Feature #3.

Defining Specs

An overview of how we decided what to test first when the project started, and how we decide what to test going forward.

  • Which tests to write first?

    • Starting out, Ryan made truth tables to try to get adequate coverage of variations in arguments and state for various forms. Ryan would be writing tests for the rest of his life, and the next, if he had to write truth-table tests against some of our service layer.

    • Establish a benchmark, then push the bar up and to the right.

  • SpecFlow Feature

    • Agreement with Biz and QA (This is what we are going to do, OK?)

    • Without a dedicated BA, it is difficult to have everything formally spec’d.

    • QA doesn’t do upfront test plans that are published to the project team

    • When we write feature files, we are distilling our understanding into a form that can be understood by QA and the Biz. They may never read them, but we should use them as the basis for Sprint Demos, walkthroughs, and prototyping, and attach them to tickets. The more visible we make them, the more they will become a part of the culture.

Spec Tags

Spec Tags (@{Name}) are used to organize tests and enable isolation in test runs. The tags are actually SpecFlow constructs used to control test execution. If you look at the spec below, the tags allow us to categorize the test based on various attributes.

@Req.4.4.2.7 @JMS @Recipients @RecipientDetails @Toolbar @Manual @OTF11533
Scenario: Toolbar color, verify background is correct color.
  When I open the Recipient Details page
  Then the toolbar background color should be "#CCCCCC"

  • @Req.4.4.2.7 – this tag signifies the requirement that the spec is based on. It provides traceability from the spec back to the requirement, as long as the numbering in the requirement document doesn’t change. We haven’t implemented this yet, but we do have some requirements traceability through the OT ticket tag, since tickets are linked to requirements.

  • @JMS @Recipients @RecipientDetails @Toolbar – these tags act as the namespace for the unit of the application the spec defines:

    • @JMS – the application

    • @Recipients – the section of the application

    • @RecipientDetails – the page or control in the application

    • @Toolbar – the sub-page/control within the parent page/control. These tags should not be deeply nested. If nesting occurs, the spec should be broken out into a separate spec, with a sub becoming a parent. This helps readability, but also reusability: a deeply nested control usually has a control somewhere in its hierarchy that is being reused elsewhere, possibly with the same or a similar spec already defined.

  • @Manual – this is a special tag that allows us to define specs that should be manually tested while asking the system to ignore the test in automated test runs.

  • @OTF11533 – this is the ticket associated with the spec. There can be multiple tickets associated with a spec. Ticket tags are only relevant during release development and can optionally be deleted after the ticket is deployed to production. Unless there is an overwhelming number of ticket tags on a spec, they should be kept for future reference.

With tags in place, we can configure a test run to only test a specific ticket, a specific page, or any combination of tags. This significantly decreases the time it takes to validate new features and bug fixes.

Writing Specs

  • SpecFlow Scenario

    • This is where the rubber meets the road: we write tests in C#, with the unit test framework providing the backing, Selenium driving the browser through the page model, and SpecFlow providing the structure.
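
A minimal sketch of the step definitions behind the toolbar scenario shown earlier, assuming SpecFlow’s context injection supplies the IWebDriver and reusing the hypothetical RecipientDetailsPage page object sketched above; the assertion logic is illustrative:

  using OpenQA.Selenium;
  using TechTalk.SpecFlow;

  [Binding]
  public class RecipientDetailsSteps
  {
      private readonly RecipientDetailsPage _page;

      // Assumes IWebDriver is registered with SpecFlow's context injection container.
      public RecipientDetailsSteps(IWebDriver driver)
      {
          _page = new RecipientDetailsPage(driver);
      }

      [When(@"I open the Recipient Details page")]
      public void WhenIOpenTheRecipientDetailsPage() => _page.Open();

      [Then(@"the toolbar background color should be ""(.*)""")]
      public void ThenTheToolbarBackgroundColorShouldBe(string expected)
      {
          // GetCssValue usually returns rgba(...), so real code may need to normalize colors first.
          var actual = _page.Toolbar.GetCssValue("background-color");
          if (!string.Equals(actual, expected, System.StringComparison.OrdinalIgnoreCase))
              throw new System.Exception($"Expected toolbar color {expected} but found {actual}");
      }
  }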

Running Tests

  • Local IDE Test

  • Local Command Line Test (see the example after this list)

  • Remote Automated Test
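
For the local command line run, a hedged sketch: if the project binds SpecFlow to NUnit or MSTest, spec tags surface as test categories, so a run can be filtered down to a ticket, a page, or any tag combination. The exact runner and filter syntax depend on the project setup:

  dotnet test --filter "TestCategory=OTF11533"
  dotnet test --filter "TestCategory=RecipientDetails & TestCategory=Toolbar"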

Test Management

  • KISS it, then KISS it again

  • Managing Environments

  • Managing Test Data

  • Types of Tests (Feature, Smoke, Functional, Regression)
