How to evaluate the size or effort required for automating user stories in an Agile Scrum methodology

Understanding Test Automation User Stories in Agile Scrum

Mohamed Yaseen
23 min read · Mar 20, 2023

User stories document requirements from the end user's perspective. The team uses these narratives to guide development and to make sure the product meets user needs. Test automation user stories describe specific testing tasks to automate so that testing is completed quickly and efficiently. This section explains why test automation user stories matter in Agile Scrum and gives examples of how they are used in real-world situations.

A test automation user story is a type of user story that focuses on automating a specific test task. These stories generally have the same format as other user stories and contain descriptions of features and functionality being tested, as well as acceptance criteria and requirements. The difference is that test automation user stories focus on automating the testing process rather than delivering new functionality.

For example, a test automation user story might be:
As a user, I want to verify that the login page displays correctly in different browsers and devices so that I can ensure that the application is accessible to all users.

This user story is focused on automating the testing of the login page, rather than on delivering new functionality. The acceptance criteria might include specific browsers and devices that need to be tested, as well as any performance or usability requirements.
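Acceptance criteria like these expand naturally into a test matrix: one case per browser/device pair. A minimal Python sketch (the browser and device names are illustrative assumptions, not the story's actual criteria):

```python
from itertools import product

# Illustrative targets -- real acceptance criteria would name the
# exact browsers and devices the team must support.
BROWSERS = ["chrome", "firefox", "safari"]
DEVICES = ["desktop-1920x1080", "tablet-768x1024", "phone-375x667"]

def login_page_test_matrix(browsers, devices):
    """Expand the acceptance criteria into one test case per
    browser/device combination."""
    return [
        {"browser": b, "device": d, "check": "login page renders correctly"}
        for b, d in product(browsers, devices)
    ]

matrix = login_page_test_matrix(BROWSERS, DEVICES)
print(len(matrix))  # 3 browsers x 3 devices = 9 test cases
```

Enumerating the matrix up front also makes the effort visible: nine combinations to script is a very different task from two.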

Another example of a test automation user story might be:
As a user, I want to verify that the search functionality returns accurate results for different types of queries so that I can be confident that the application is functioning correctly.

This user story is focused on automating the testing of the search functionality, rather than on delivering new functionality. The acceptance criteria might include specific types of queries that need to be tested, as well as any performance or accuracy requirements.
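To make "accurate results for different types of queries" concrete, here is a toy search function with one testable behavior per query type; the catalog, the matching rules, and the `difflib` cutoff are all illustrative assumptions:

```python
import difflib

CATALOG = ["mouse", "keyboard", "monitor", "headset", "webcam"]

def search(query, catalog=CATALOG):
    """Toy search: exact match first, then substring (partial) match,
    then a fuzzy fallback that catches misspelled queries."""
    q = query.lower().strip()
    if q in catalog:                                   # exact match
        return [q]
    partial = [item for item in catalog if q in item]  # partial match
    if partial:
        return partial
    # misspelled query: fall back to close matches
    return difflib.get_close_matches(q, catalog, n=3, cutoff=0.6)

print(search("keyboard"))  # exact: ['keyboard']
print(search("mon"))       # partial: ['monitor']
print(search("keybord"))   # misspelled: ['keyboard']
```

Each branch here corresponds to one class of acceptance criteria: exact, partial, and misspelled queries would each get their own automated test cases.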

Understanding test automation user stories matters because it promotes effective and efficient testing. By automating some testing tasks, your team can devote more time and resources to other project work. This speeds up development and helps deliver your product on time and within budget.

Additionally, test automation user stories help improve the quality of your product by ensuring that your tests are complete and consistent. Automated tests can be run repeatedly to help identify and fix problems early in the development process. This helps reduce the risk of errors and improve the overall quality of your product.

Finally, test automation user stories help improve team communication and collaboration. By defining specific testing tasks in the form of user stories, the team can ensure everyone is on the same page and working toward the same goals. This helps avoid misunderstandings and improves team efficiency.

Breaking Down Test Automation User Stories into Smaller Tasks

Breaking down test automation user stories into smaller tasks is an important step in ensuring that testing is effective and efficient. By breaking down larger user stories into smaller, more manageable tasks, the team can easily estimate the effort required for each task, assign tasks to team members, and track the completion schedule. In this section, we will discuss the importance of breaking down test automation user stories into smaller tasks and provide examples of how to do this effectively.

When breaking down test automation user stories into smaller tasks, it’s important to keep the following considerations in mind:

  1. Each task should be small enough to complete in a single sprint. This ensures that progress can be tracked and that the team can make adjustments as needed.
  2. Each task should be focused on a specific testing task or action. This makes it easier to estimate effort and assign tasks to team members based on their skills and experience.
  3. Each task should be well-defined and have clear acceptance criteria. This ensures that the team is working towards a clear goal and that testing is consistent and thorough.

For example, let’s take the test automation user story from the previous section:

“As a user, I want to verify that the search functionality returns accurate results for different types of queries so that I can be confident that the application is functioning correctly.”

This user story could be broken down into the following smaller tasks:

  1. Create test cases for different types of queries, such as exact matches, partial matches, and misspelled queries.
  2. Develop test scripts to automate the execution of the test cases.
  3. Set up test data to simulate realistic user scenarios and queries.
  4. Run the automated tests and record the results.
  5. Analyze the test results and identify any issues or bugs.
  6. Collaborate with developers to resolve any issues or bugs found in testing.

Each of these tasks is focused on a specific testing task or action and can be completed within a single sprint. By breaking down the user story into smaller tasks, the team can more easily estimate the effort required for each task, assign tasks to team members based on their skills and experience, and track progress toward completion.

Another example of breaking down a test automation user story into smaller tasks might be:

“As a user, I want to verify that the application is compatible with different screen resolutions and device sizes so that I can ensure that the application is accessible to all users.”

This user story could be broken down into the following smaller tasks:

  1. Identify the specific screen resolutions and device sizes that need to be tested.
  2. Create test cases for each screen resolution and device size.
  3. Develop test scripts to automate the execution of the test cases.
  4. Set up test data to simulate realistic user scenarios for each screen resolution and device size.
  5. Run the automated tests and record the results.
  6. Analyze the test results and identify any issues or bugs.
  7. Collaborate with developers to resolve any issues or bugs found in testing.

Again, each of these tasks is focused on a specific testing task or action and can be completed within a single sprint. By breaking down the user story into smaller tasks, the team can more easily estimate the effort required for each task, assign tasks to team members based on their skills and experience, and track progress toward completion.

In summary, breaking down test automation user stories into smaller tasks is an essential step in ensuring that testing is effective and efficient. By focusing on specific test tasks or actions, the team can more easily estimate effort, assign tasks, and track progress. This can help speed up the development process, improve product quality, and foster team collaboration and communication.

Estimating the Effort Required for Each Task

Estimating the effort needed for each task is an essential step in the test automation process. Accurate estimates help with planning sprints, allocating resources, and keeping the project on schedule. Estimating effort for test automation tasks can be difficult because many factors are at play, including the complexity of the feature being tested, the number of test cases needed, and the level of automation required. This section walks through how to estimate the effort needed for each task, with examples.

When estimating the effort required for each task, there are several factors to consider:

  1. The complexity of the feature being tested: More complex features will generally require more time and effort to test. This could be due to factors such as the number of workflows, the number of possible inputs and outputs, or the complexity of the business logic involved.
  2. The number of test cases required: The more test cases required, the more time and effort it will take to test the feature. This could be due to factors such as the number of scenarios to be tested, the number of edge cases, or the number of data combinations that need to be tested.
  3. The level of automation needed: The level of automation needed will depend on the nature of the test cases and the resources available. Automated tests are generally faster and more efficient than manual tests, but they require additional effort to create and maintain.

For example, let’s consider the user story from the previous sections:

“As a user, I want to verify that the search functionality returns accurate results for different types of queries so that I can be confident that the application is functioning correctly.”

Breaking down this user story into smaller tasks, we might estimate the effort required for each task as follows:

  1. Create test cases for different types of queries, such as exact matches, partial matches, and misspelled queries. — 4 hours
  2. Develop test scripts to automate the execution of the test cases. — 8 hours
  3. Set up test data to simulate realistic user scenarios and queries. — 2 hours
  4. Run the automated tests and record the results. — 2 hours
  5. Analyze the test results and identify any issues or bugs. — 4 hours
  6. Collaborate with developers to resolve any issues or bugs found in testing. — 4 hours

In this example, we estimated the effort required for each task based on the complexity of the feature being tested, the number of test cases required, and the level of automation needed. The total estimated effort for this user story is 24 hours, which could be completed in a single sprint.
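The arithmetic above is simple enough to script as a quick sanity check; the 30-hour capacity figure below is a made-up assumption, not a Scrum rule:

```python
# Task estimates from the breakdown above, in hours.
tasks = {
    "create test cases for query types": 4,
    "develop automation test scripts": 8,
    "set up realistic test data": 2,
    "run automated tests and record results": 2,
    "analyze results and identify issues": 4,
    "collaborate with developers on fixes": 4,
}

total_hours = sum(tasks.values())
print(total_hours)  # 24

# Hypothetical capacity check: one tester with 30 focused hours
# available in the sprint (an illustrative assumption).
capacity_hours = 30
fits_in_sprint = total_hours <= capacity_hours
print(fits_in_sprint)  # True
```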

Another example of estimating the effort required for each task might be:

“As a user, I want to verify that the application is compatible with different screen resolutions and device sizes so that I can ensure that the application is accessible to all users.”

Breaking down this user story into smaller tasks, we might estimate the effort required for each task as follows:

  1. Identify the specific screen resolutions and device sizes that need to be tested. — 2 hours
  2. Create test cases for each screen resolution and device size. — 6 hours
  3. Develop test scripts to automate the execution of the test cases. — 12 hours
  4. Set up test data to simulate realistic user scenarios for each screen resolution and device size. — 4 hours
  5. Run the automated tests and record the results. — 4 hours
  6. Analyze the test results and identify any issues or bugs. — 4 hours
  7. Collaborate with developers to resolve any issues or bugs found in testing. — 4 hours

In this example, we estimated the effort required for each task based on the number of screen resolutions and device sizes to be tested, the complexity of the test scripts, and the level of automation needed. The total estimated effort for this user story is 36 hours.

Using Story Points to Estimate Test Automation User Stories

Story points are a common technique for estimating the work needed to complete a user story. They gauge the complexity, effort, and risk involved in implementing the story. Because story points are a relative measure of effort rather than an absolute measure of time, they help teams estimate the amount of work more consistently. This section covers using story points to estimate test automation user stories, with examples.

When using story points to estimate the effort required for a user story, teams assign points to the story based on its complexity, effort, and risk. The higher the story points assigned, the more complex and time-consuming the user story is. The points assigned can be used to measure the team’s capacity and help them plan sprints.

For example, let’s consider the user story we used in the previous sections:

“As a user, I want to verify that the search functionality returns accurate results for different types of queries so that I can be confident that the application is functioning correctly.”

To estimate the story points for this user story, the team might consider the complexity, effort, and risk involved in completing the user story. They might assign the following story points to the user story:

  1. Complexity — Medium — 5 Story Points
  2. Effort — Medium — 5 Story Points
  3. Risk — Low — 2 Story Points

Based on these factors, the total story points assigned for this user story would be 12. This estimate can be used to help the team plan the effort required to complete the user story in a sprint.

Another example of estimating story points for a user story might be:

“As a user, I want to verify that the application is compatible with different screen resolutions and device sizes so that I can ensure that the application is accessible to all users.”

To estimate the story points for this user story, the team might consider the complexity, effort, and risk involved in completing the user story. They might assign the following story points to the user story:

  1. Complexity — High — 8 Story Points
  2. Effort — High — 8 Story Points
  3. Risk — Medium — 5 Story Points

Based on these factors, the total story points assigned for this user story would be 21. This estimate can be used to help the team plan the effort required to complete the user story in a sprint.

Once the team has estimated the story points for the user stories, they can use them to plan sprints and allocate resources. For example, the two user stories discussed in this article total 33 story points (12 + 21), so a team with a velocity of 35 story points per sprint could include both in a single sprint, while a team with a velocity of 30 would need to split them across sprints.
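The two story-point totals above (12 and 21) add up to 33 points, so whether both stories fit a single sprint depends on the team's velocity. A small sketch of that check, with illustrative velocity figures:

```python
# Story point totals from the two example user stories above.
stories = {"search accuracy": 12, "responsive layout": 21}

def fits_in_sprint(story_points, velocity):
    """Check whether a set of stories fits within the team's velocity."""
    return sum(story_points) <= velocity

total = sum(stories.values())
print(total)                                 # 33
print(fits_in_sprint(stories.values(), 30))  # False: 33 > 30
print(fits_in_sprint(stories.values(), 35))  # True
```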

In conclusion, using story points to estimate the effort required for test automation user stories can be an effective way for teams to plan sprints and allocate resources. Story points allow teams to estimate the effort required for a user story accurately and can help them plan their capacity accordingly. By considering the complexity, effort, and risk involved in completing a user story, teams can assign appropriate story points and estimate the effort required to complete the user story in a sprint.

Using Time-Based Estimates to Estimate Test Automation User Stories

Time-based estimates are another popular estimation method. Unlike story points, they are an absolute measure of time: the team estimates how long it will take to finish each user story. Time-based estimates work well when the team has a solid understanding of the effort involved and can divide the work into smaller tasks of known duration. This section covers estimating test automation user stories with time-based estimates, with examples.

To use time-based estimates, the team needs to estimate the amount of time required to complete each task. These estimates are then added up to provide an overall estimate of the effort required for the user story. Time-based estimates can be measured in hours, days, or weeks, depending on the team’s preference and the duration of the sprint.

For example, let’s consider the user story we used in the previous sections:

“As a user, I want to verify that the search functionality returns accurate results for different types of queries so that I can be confident that the application is functioning correctly.”

To estimate the time required for this user story, the team might break down the work into the following tasks:

  1. Analyze the search functionality requirements — 2 hours
  2. Set up the test environment — 1 day
  3. Create test data for different types of queries — 4 hours
  4. Develop and execute test cases — 3 days
  5. Report and verify defects — 1 day

Based on these tasks and their estimated durations, the team might estimate the total time required to complete the user story to be around 6 business days.
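Because the task list mixes hours and days, it helps to normalize everything before totaling. A sketch, assuming one business day equals 8 working hours:

```python
import math

HOURS_PER_DAY = 8  # assumption: one business day = 8 working hours

# Task estimates from the breakdown above, normalized to hours.
tasks_hours = {
    "analyze search functionality requirements": 2,
    "set up the test environment": 1 * HOURS_PER_DAY,
    "create test data for query types": 4,
    "develop and execute test cases": 3 * HOURS_PER_DAY,
    "report and verify defects": 1 * HOURS_PER_DAY,
}

total_hours = sum(tasks_hours.values())
business_days = math.ceil(total_hours / HOURS_PER_DAY)
print(total_hours, business_days)  # 46 hours -> 6 business days
```

The 46 hours round up to 6 business days, matching the estimate above.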

Another example of estimating time-based estimates for a user story might be:

“As a user, I want to verify that the application is compatible with different screen resolutions and device sizes so that I can ensure that the application is accessible to all users.”

To estimate the time required for this user story, the team might break down the work into the following tasks:

  1. Analyze the requirements for different screen resolutions and device sizes — 1 day
  2. Set up the test environment — 1 day
  3. Create test data for different screen resolutions and device sizes — 2 days
  4. Develop and execute test cases — 5 days
  5. Report and verify defects — 1 day

Based on these tasks and their estimated durations, the team might estimate the total time required to complete the user story to be around 10 business days.

Once the team has estimated the time required for the user stories, they can use them to plan sprints and allocate resources. For example, if the team has a sprint duration of two weeks, they might decide to include both of the user stories we have discussed in this article in a single sprint.

In conclusion, using time-based estimates to estimate the effort required for test automation user stories can be an effective way for teams to plan sprints and allocate resources. Time-based estimates allow teams to estimate the amount of time required to complete a user story accurately and can help them plan their capacity accordingly. By breaking down the work into smaller tasks with a defined duration, teams can estimate the time required to complete the user story in a sprint.

Reviewing and Revising Estimates for Test Automation User Stories

Reviewing and updating estimates for test automation user stories is crucial. To keep estimates accurate and current, the team must regularly review and revise them. We’ll go over how to review and update user story estimates with examples in this section.

There are several reasons why the team may need to review and revise their estimates for test automation user stories. Some of these reasons include:

  • New information: The team may receive new information about the user story that impacts the original estimates. For example, they may discover that the test environment is more complex than originally anticipated, which could increase the time required to complete the user story.
  • Changes in requirements: The user story may undergo changes in requirements that impact the original estimates. For example, the team may need to test additional functionality that was not originally included in the user story.
  • Learning from past sprints: The team may have completed similar user stories in the past, and they can use this experience to revise their estimates for new user stories.

To illustrate the process of reviewing and revising estimates, let’s consider the following example:

“As a user, I want to verify that the login functionality works correctly for different types of user credentials so that I can be confident that my data is secure.”

The team initially estimated that this user story would take around 3 days to complete. However, after starting work on the user story, they discover that the test environment is more complex than anticipated. The team may need to revise their estimates to reflect this new information.

To revise the estimates, the team might break down the work into smaller tasks and estimate the time required for each task. For example:

  1. Analyze the login functionality requirements — 2 hours
  2. Set up the test environment — 2 days
  3. Create test data for different user credentials — 4 hours
  4. Develop and execute test cases — 4 days
  5. Report and verify defects — 1 day

Based on these revised estimates, the team may now estimate that the user story will take around 8 business days to complete, considerably longer than their original estimate of 3 days.

It’s important to note that revising estimates is not always about increasing the time required to complete a user story. Sometimes the team may discover that the user story is less complex than originally estimated and may require less time to complete. In these cases, the team may revise their estimates downward to reflect this new information.

For example, let’s consider the following user story:

“As a user, I want to verify that the application works correctly in different web browsers so that I can use my preferred web browser to access the application.”

The team initially estimated that this user story would take around 10 business days to complete. However, after starting work on the user story, they discover that the web browsers they need to test are more limited than originally anticipated. The team may need to revise their estimates to reflect this new information.

To revise the estimates, the team might break down the work into smaller tasks and estimate the time required for each task. For example:

  1. Analyze the web browser requirements — 2 hours
  2. Set up the test environment — 1 day
  3. Create test data for different web browsers — 2 days
  4. Develop and execute test cases — 3 days
  5. Report and verify defects — 1 day

Based on these revised estimates, the team may now estimate that the user story will take around 7 business days to complete, which is shorter than their original estimate of 10 days.

In conclusion, reviewing and updating user story estimates is a crucial part of the estimation process. For estimates to remain accurate, the team must review and revise them periodically.

Considering the Value of Test Automation User Stories

The value that the story will bring to the product and the team should be taken into account when estimating test automation user stories. The value of a test automation user story is based on how much it improves the overall quality of the product, how much it lightens the team’s workload, and how much it ultimately saves in terms of time and resources.

To illustrate the importance of considering the value of test automation user stories, let’s consider the following examples:

Example 1:

“As a user, I want to verify that the application loads correctly on different devices so that I can use the application on any device I choose.”

In this example, the team estimates that the user story will take around 5 days to complete. However, upon further review, they discover that the value of this user story is relatively low. The application is not used on many different devices, and the team has already tested it on the most popular devices. Therefore, the team decides to deprioritize this user story and focus on other stories that bring more value to the product.

Example 2:

“As a user, I want to verify that the application works correctly with different screen sizes so that I can use the application on any device I choose.”

In this example, the team estimates that the user story will take around 7 days to complete. Upon review, they discover that the value of this user story is quite high. The application is used on many different screen sizes, and the team has received complaints from users about the application not displaying correctly on certain devices. Therefore, the team decides to prioritize this user story and allocate resources accordingly.

Example 3:

“As a user, I want to verify that the application works correctly with different input methods so that I can use the application with any input device I choose.”

In this example, the team estimates that the user story will take around 10 days to complete. Upon review, they discover that the value of this user story is high. The application is used with many different input methods, and the team has received complaints from users about the application not responding correctly to certain inputs. Furthermore, the team realizes that by automating these tests, they can save a significant amount of time and resources in the long run. Therefore, the team decides to prioritize this user story and allocate resources accordingly.

In conclusion, considering the value of test automation user stories is crucial. The team must assess the value each user story adds to the product and to the team. By prioritizing high-value user stories, the team can make the greatest impact on product quality while using its resources efficiently.

The Importance of Ongoing Estimation in Agile Scrum

The team needs estimation in order to plan and carry out its work effectively. Estimation helps the team allocate resources and break challenging work into smaller, more manageable chunks. But estimation is not a one-time activity that happens at the start of a project; it is an ongoing process that continues throughout the project's lifecycle. This section discusses the value of ongoing estimation, with examples.

  1. To Stay on Track with the Project Timeline

One of the essential reasons for ongoing estimation in Agile Scrum is to ensure that the team stays on track with the project timeline. The team can measure their progress and compare it with the original estimation. If the actual progress is behind the estimation, then the team can take corrective actions to bring the project back on track. On the other hand, if the team is ahead of the estimation, then they can adjust the plan and allocate more resources to other tasks.

For example, let's say the team has estimated that it will take two weeks to complete a particular user story. After one week, the team realizes they have completed less than half of the work and are falling behind. By revising the estimate, the team can make the adjustments needed to complete the task within the original timeline.
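A simple linear projection captures this kind of mid-story check: divide elapsed time by the fraction of work completed. The numbers below are hypothetical:

```python
def projected_duration(elapsed_days, fraction_complete):
    """Linear projection: if 40% of the work took 5 days,
    the whole story is on pace to take 12.5 days."""
    return elapsed_days / fraction_complete

# Hypothetical numbers: a 10-day estimate, 40% done after 5 days.
original_estimate = 10
projection = projected_duration(5, 0.40)
print(projection)                       # 12.5
print(projection > original_estimate)   # True -> behind schedule
```

Linear projection is crude (work rarely progresses uniformly), but it is enough to flag a story that needs its estimate revised.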

2. To Optimize Resource Allocation

Another reason why ongoing estimation is essential in Scrum is that it helps the team to optimize resource allocation. The team can monitor the progress of the project and allocate resources to tasks that require more attention. For example, if the team realizes that a particular user story is taking longer than expected, they can allocate more resources to that story to complete it within the original timeline.

3. To Manage the Product Backlog

Estimation is also crucial for managing the product backlog effectively. The team can estimate each user story and set priorities accordingly: high-value user stories with lower effort estimates can be completed earlier, so the most important features of the product are delivered first, while high-effort user stories with lower value can be deprioritized or divided into more manageable parts.

4. To Facilitate Continuous Improvement

Ongoing estimation is also essential to continuous improvement in Agile Scrum. After every sprint, the team can evaluate how accurate their estimates were, adjust their estimation process accordingly, and improve their accuracy over time.

For example, if the team has consistently overestimated the effort required for user stories, they can analyze the reasons behind it and adjust their estimation process accordingly. They may realize that they need to break down user stories into smaller pieces or improve their understanding of the technologies involved.
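One lightweight way to run that retrospective check is to track the ratio of actual to estimated effort across recent sprints; the history below is made up for illustration:

```python
# Hypothetical (estimated, actual) effort pairs from past sprints,
# in story points or hours -- the numbers are illustrative.
history = [(8, 12), (5, 6), (13, 20), (3, 3)]

def mean_estimation_error(history):
    """Average ratio of actual to estimated effort. A value above 1.0
    means the team tends to underestimate; below 1.0, overestimate."""
    ratios = [actual / estimated for estimated, actual in history]
    return sum(ratios) / len(ratios)

error = mean_estimation_error(history)
print(round(error, 2))  # 1.31 -> this team tends to underestimate
```

A ratio that consistently drifts from 1.0 in either direction is the signal to revisit how stories are broken down or sized.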

To sum up, ongoing estimation is a critical procedure that aids the team in managing the product backlog, maximizing resource allocation, and promoting continuous improvement. Estimation is a process that requires ongoing monitoring and adjustment rather than being a one-time event. The team can make sure they are delivering the most important features of the product within the original timeline by regularly estimating and tracking progress.

Best Practices for Estimating Test Automation User Stories in Agile Scrum

Estimating test automation user stories in Scrum can be difficult, but a few best practices can improve the accuracy and effectiveness of the estimation process. This section discusses guidelines for estimating test automation user stories, with examples.

  1. Involve the Entire Team in the Estimation Process

In Agile Scrum, the entire team should be involved in the estimation process. This includes developers, testers, and other stakeholders. Having multiple perspectives helps to ensure that the estimation is accurate and that all aspects of the user story are considered. By involving the entire team in the estimation process, everyone can understand the scope of the user story and the effort required to complete it.

For example, a developer may understand the effort required to write code, while a tester may understand the effort required to create and execute test cases. By involving both developers and testers in the estimation process, the team can get a more accurate estimation of the user story’s effort.

2. Break Down User Stories into Smaller Tasks

One crucial best practice is to divide user stories into smaller tasks. Larger, more complex tasks are harder to estimate and manage than smaller, simpler tasks. The team will be better able to understand the user story’s scope and make sure that all of its components are taken into account during the estimation process if user stories are broken down into smaller tasks.

For example, a user story to automate login functionality may be broken down into smaller tasks such as creating test cases for valid and invalid login scenarios, creating test data, and implementing automation scripts for each test case.

3. Use Relative Sizing and Story Points

Relative sizing and story points are frequently used for user story estimates. Story points are a comparative measure of effort that lets the team size a user story relative to other user stories. Using story points helps the team avoid the pitfall of estimating user story effort in absolute terms.

For example, the team can use the Fibonacci sequence to assign story points to a user story. A user story that is twice as complex as another user story would be assigned twice as many story points.
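Snapping a raw relative-size guess to the Fibonacci scale can be scripted; the baseline value below is an illustrative assumption:

```python
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]

def nearest_fibonacci_points(relative_size):
    """Snap a raw relative-size guess (e.g. 'twice story X') to the
    nearest value on the Fibonacci scale."""
    return min(FIBONACCI_SCALE, key=lambda p: abs(p - relative_size))

baseline = 3                     # a well-understood reference story
twice_as_complex = baseline * 2  # raw guess: 6, not on the scale
print(nearest_fibonacci_points(twice_as_complex))  # 5
```

The widening gaps in the Fibonacci sequence are deliberate: they discourage false precision for larger, less certain stories.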

4. Consider the Risk and Complexity of the User Story

When estimating test automation user stories, it is essential to consider the risk and complexity of the user story. A user story that involves a complex technology or a new feature may require more effort than a user story that involves a familiar technology. Similarly, a user story that has a high risk of failure may require more effort than a user story that has a low risk of failure.

For example, a user story to automate payment processing may have a higher risk of failure than a user story to automate a login process. The team should consider this risk when estimating the effort required for each user story.

5. Review and Revise Estimates Regularly

An important best practice is to review and revise estimates frequently. The amount of work necessary to finish a user story may change as the team works through the project because they might learn new information or run into unforeseen difficulties. The team can make sure that the estimates are accurate and current by regularly reviewing and revising them.

For example, the team may have estimated that a user story would take two weeks to complete. However, after one week, the team may realize that they are behind schedule and need to revise the estimate to ensure that they can complete the user story within the original timeline.

In summary, estimating test automation user stories requires a careful analysis of each story's scope, risk, and complexity. By involving the entire team in the estimation process, breaking down user stories into smaller tasks, applying relative sizing, and reviewing estimates regularly, the team can produce accurate and useful estimates.

Common Pitfalls to Avoid When Estimating Test Automation User Stories in Agile Scrum

The team may find it difficult to estimate test automation user stories, and there are a few common pitfalls that they should stay clear of to ensure accurate and effective estimation. In this article, we’ll go over some typical mistakes to keep away from when estimating test automation user stories, with examples.

1. Focusing Only on Development Effort

One common pitfall when estimating test automation user stories in Scrum is focusing only on the development effort. While development effort is essential, it is not the only factor that should be considered when estimating the effort required to complete a user story.

For example, a user story to automate payment processing may require not only development effort but also effort in creating test data, creating and executing test cases, and implementing automation scripts. Focusing only on development efforts can lead to inaccurate estimation.

2. Ignoring Risk and Complexity

Another frequent mistake when estimating test automation user stories is to ignore risk and complexity. It may take more work to complete user stories with a higher risk of failure or complexity than user stories with a lower risk. Neglecting these elements may result in erroneous estimation and project delays.

For example, a user story to automate a critical feature may have a higher risk of failure than a user story to automate a non-critical feature. Ignoring this risk can lead to inaccurate estimation and delays in completing the project.

3. Estimating in Absolute Terms

Absolute estimation is a common mistake. It can be inaccurate to estimate the time needed to complete a user story in absolute terms because it ignores the relative complexity of other user stories.

For example, estimating that a user story will take 40 hours to complete may not be accurate if other user stories are estimated to take 20 hours to complete. Using relative sizing and story points can help to avoid this pitfall.

4. Not Reviewing and Revising Estimates Regularly

Another frequent blunder is failing to regularly review and revise estimates. The amount of work necessary to finish a user story may change as the project moves forward due to new information or unforeseen difficulties that the team may encounter. Regular review and revision of estimates can prevent inaccurate estimations and delays in project completion.

For example, the team may have estimated that a user story will take two weeks to complete. However, after one week, the team may realize that they are behind schedule and need to revise the estimate to ensure that they can complete the user story within the original timeline.

5. Not Considering the Value of User Stories

User story value is often overlooked, which is a common mistake. Estimating the time needed to complete a user story without taking into account its value can result in inefficient resource use and project completion delays.

For example, a user story to automate a non-critical feature may require significant effort, but the value it provides to the project may not be worth the effort required to complete it. The team should consider the value of each user story when estimating the effort required to complete it.

Estimating test automation user stories in Agile Scrum requires careful consideration of development effort, risk and complexity, relative sizing and story points, and the value of each user story. The team can ensure accurate and effective estimation by avoiding common pitfalls such as focusing only on development effort, ignoring risk and complexity, estimating in absolute terms, failing to review and revise estimates regularly, and overlooking the value of user stories.


Mohamed Yaseen

Experienced QA Automation Lead with expertise in test automation, frameworks, and tools. Ensures quality software with attention to detail and analytical skills