ISTQB CTFL Syllabus Uncovered: Your Ultimate Guide Vol2

Mohamed Yaseen
Oct 12, 2023 · 13 min read

--

2.1. Testing in the Context of a Software Development Lifecycle

A software development lifecycle (SDLC) model is similar to a roadmap for developing software, and there are several ways to follow that roadmap, as well as particular approaches to assist you along the way.

2.1.1. Impact of the Software Development Lifecycle on Testing

The SDLC method you choose affects various aspects of testing, such as when and how tests are conducted, the level of detail in test documentation, the choice of testing methods, the extent of automation, and the roles of testers.

In traditional, sequential development models, testing often starts later in the process because the code is created in the later phases. This delays dynamic testing, where you test the actual software, until later stages.

In some iterative and incremental development models, testing can happen in each iteration, both in a static (without running the software) and dynamic (running the software) way. This is because each iteration delivers a working part of the software, and frequent testing and feedback are essential.

In Agile software development, where change is expected, less documentation and more test automation are favored. Testing is often done using experience-based techniques, which don’t require extensive planning beforehand.

In simpler terms, the way you test software depends on how the software is being developed. In some methods, testing can happen at various stages, while in others, it may be delayed until later. Agile projects focus on adaptability and use less documentation, relying on automation and experienced testers.

2.1.2. Software Development Lifecycle and Good Testing Practices

  • Corresponding Test Activities: In any software development process, there should be a testing phase that matches each development activity. This ensures that all parts of the software development are subject to quality control through testing.
  • Specific Test Objectives: Different levels of testing have specific objectives. This allows testing to be thorough without unnecessary duplication or redundancy. Each testing level has a unique purpose.
  • Early Test Planning: Test analysis and design for a particular testing level should start during the corresponding development phase of the SDLC. This follows the principle of “early testing,” which means testing should begin as soon as possible in the development process.
  • Early Involvement of Testers: Testers should be involved in reviewing various work products, such as documentation, as soon as drafts are available. This enables early testing and defect detection, supporting the “shift-left” strategy, which means identifying and addressing issues early in the development cycle.

In simple terms, these best practices emphasize that testing should be integrated into the development process from the beginning and tailored to the specific objectives of each testing level. This early and ongoing testing involvement by testers helps ensure software quality and identifies issues as soon as possible.

2.1.3. Testing as a Driver for Software Development

TDD (Test-Driven Development), ATDD (Acceptance Test-Driven Development), and BDD (Behavior-Driven Development) are all development approaches that employ tests to drive the development process. They all stress early testing and have a “shift-left” methodology, which means that tests are developed before producing code. These approaches are iterative. Here’s a quick rundown of each:

  • Test-Driven Development (TDD):
  • In TDD, the coding process is directed by writing test cases rather than extensive software design.
  • Developers write tests first, specifying what the code should do.
  • Then, they write code to meet the requirements of the tests.
  • Finally, both the code and the tests are refined or refactored.
  • TDD ensures that the code works correctly based on the specified tests.
  • Acceptance Test-Driven Development (ATDD):
  • ATDD derives tests from acceptance criteria during the system design process.
  • Tests are written before the corresponding part of the application is developed.
  • These tests serve as a way to ensure that the software meets the specified acceptance criteria.
  • Behavior-Driven Development (BDD):
  • BDD focuses on expressing the desired behavior of an application using test cases written in a simple, natural language that stakeholders can easily understand.
  • Typically, BDD tests follow a format like “Given/When/Then” to define the scenario, action, and expected outcome.
  • These test cases can be automatically translated into executable tests, helping to ensure that the software behaves as intended.
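The test-first cycle described above can be sketched in a few lines of Python. This is an illustrative example, not part of the syllabus: the `slugify` function and its test are hypothetical, but the order of work (test written first, code written to satisfy it) and the Given/When/Then structure favored by BDD are shown as comments.

```python
# A minimal TDD-style sketch in pytest conventions. In real TDD the test
# below would be written (and seen to fail) before `slugify` existed.

def slugify(title: str) -> str:
    # Implementation written only to make the pre-existing test pass.
    return title.strip().lower().replace(" ", "-")

def test_slugify_builds_url_friendly_slug():
    # Given/When/Then structure, as used in BDD scenarios:
    # Given a raw article title with extra whitespace
    title = "  Hello World  "
    # When it is converted to a slug
    slug = slugify(title)
    # Then the result is lowercase and hyphen-separated
    assert slug == "hello-world"
```

After the test passes, both the code and the test would be refactored as needed — the final step of the TDD cycle described above.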

All of these approaches share the common practice of creating and using tests to guide the development process. The tests can also be maintained as automated tests to ensure the ongoing quality of the code through adaptations and refactoring.

2.1.4. DevOps and Testing

DevOps is an organizational culture that aligns development and operations teams around common goals. It promotes team autonomy, quick feedback, integrated tools, and technical practices like continuous integration and continuous delivery. This enables teams to produce, test, and deploy high-quality code more quickly, offering various testing advantages.

  • Fast Feedback: DevOps provides quick feedback on code quality and identifies any adverse effects on existing code when changes are made.
  • Shift-Left Testing: CI encourages a “shift-left” approach in testing, where developers submit high-quality code along with component tests and static analysis, promoting early testing.
  • Automated Processes: Automation through CI/CD streamlines the establishment of stable test environments, making testing more efficient.
  • Focus on Non-Functional Quality: DevOps broadens the view to include non-functional quality characteristics like performance and reliability.
  • Reduction in Manual Testing: Automation in the delivery pipeline reduces the need for repetitive manual testing, saving time and effort.
  • Risk Mitigation: Automated regression tests at scale help minimize the risk of regression issues.

However, DevOps also comes with its challenges:

  • Establishing the Delivery Pipeline: Creating and setting up the DevOps delivery pipeline requires careful planning and implementation.
  • Introducing and Maintaining CI/CD Tools: The adoption and maintenance of CI/CD tools can be complex.
  • Resource Requirements for Test Automation: Test automation may require additional resources and could be challenging to establish and maintain.
  • Role of Manual Testing: Despite high levels of automation, manual testing, particularly from a user’s perspective, remains necessary.

In summary, DevOps is about fostering collaboration between development and operations, emphasizing automation, and promoting rapid, high-quality software delivery. While it offers many benefits for testing, it also presents some implementation challenges that organizations need to address.

2.1.5. Shift-Left Approach

The principle of early testing, often referred to as “shift-left,” means conducting testing earlier in the Software Development Life Cycle (SDLC). Shift-left emphasizes that testing shouldn’t be delayed until the later stages of development, but it doesn’t imply neglecting testing in those later stages. Here are some practices to achieve a shift-left approach in testing:

  • Reviewing Specifications: Examine project specifications with a testing perspective. This review can identify potential issues like ambiguities, incompleteness, and inconsistencies.
  • Writing Test Cases Early: Create test cases before writing the actual code. Run these test cases in a test environment as the code is being implemented.
  • Using Continuous Integration (CI) and Continuous Delivery (CD): Employ CI and, ideally, CD, which provide rapid feedback and include automated component tests. These tests run when code is submitted to the repository, ensuring that code changes don’t break existing functionality.
  • Static Code Analysis: Analyze the source code statically, either before dynamic testing or as part of an automated process. This helps catch code issues early.
  • Non-Functional Testing: Start non-functional testing, such as performance or reliability testing, at the component test level when possible. Typically, these tests occur later in the SDLC when a complete system and a representative test environment are available.
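As a small illustration of the static analysis practice above, the sketch below uses Python's standard `ast` module to flag bare `except:` handlers — a common code smell — without ever running the analyzed code. The checker and the rule it enforces are illustrative; real projects would typically use an established tool such as a linter.

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return line numbers of bare `except:` handlers in the given source.

    This is static analysis: the code under inspection is parsed,
    not executed.
    """
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # prints [4]: the bare except on line 4
```

A check like this can run in a CI pipeline on every commit, giving the rapid feedback that shift-left relies on.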

While adopting a shift-left approach may require additional training, effort, or costs earlier in the development process, it is expected to save effort and costs later. However, stakeholders must be convinced and support this concept for it to be successful.

In simple terms, shift-left testing means testing earlier in the software development process. This involves reviewing requirements, writing tests before coding, using automation tools, and focusing on non-functional testing as early as possible. While it may require more initial effort, it ultimately leads to better software quality and cost savings. Having the support of stakeholders is essential for the success of this approach.

2.1.6. Retrospectives and Process Improvement

Retrospectives, also known as post-project meetings, are gatherings that typically occur at the end of a project, iteration, release milestone, or when needed. The timing and structure of these meetings depend on the specific Software Development Life Cycle (SDLC) being followed. In retrospectives, participants, which can include testers, developers, architects, product owners, and business analysts, discuss the following:

  • Successes to Retain: They identify what went well in the project and should be preserved for future efforts.
  • Improvements: They identify aspects that didn’t work as expected and could be enhanced or fixed.
  • Implementing Changes: They discuss how to incorporate these improvements and retain the successful practices in future projects.

The outcomes of these retrospectives are documented and typically included in the test completion report. Retrospectives are crucial for achieving continuous improvement, and it’s essential to ensure that the suggested improvements are acted upon.

In the context of testing, retrospectives offer several benefits, such as:

  • Enhanced Test Effectiveness and Efficiency: By implementing process improvement suggestions, testing becomes more effective and efficient.
  • Improved Quality of Testware: Collaboratively reviewing the test processes helps enhance the quality of test-related materials and practices.
  • Team Bonding and Learning: Participants have the opportunity to voice issues and propose improvements, fostering team bonding and learning.
  • Enhanced Quality of the Test Basis: Deficiencies in requirements, such as their scope and quality, can be identified and addressed, resulting in a better foundation for testing.
  • Better Collaboration between Development and Testing: Regularly reviewing and optimizing collaboration fosters improved cooperation between development and testing teams.

In simpler terms, retrospectives are meetings held at the end of a project or phase to reflect on what went well, what didn’t, and how to make things better in the future. They bring various team members together to discuss and learn from their experiences. For testing, this process can lead to more effective testing, better test quality, team cohesion, improved requirements, and stronger collaboration between developers and testers.

2.2. Test Levels and Test Types

Test levels are groups of test activities that are organized and managed together. Each test level represents a specific stage of the software development process, from individual components to complete systems or, in certain cases, even systems of systems. Test levels are closely tied to other activities within the Software Development Life Cycle (SDLC). In traditional, sequential SDLC models, the exit criteria of one test level often serve as the entry criteria for the next level. However, in some iterative models, this may not always be the case, as development activities may span multiple test levels, and test levels can sometimes overlap in time.

Test types, on the other hand, are groups of test activities related to specific quality characteristics of the software. Most of these test activities can be performed at every test level. Test types help focus on various aspects of software quality, such as functionality, performance, security, and more, throughout different stages of development.

In simple terms, test levels are like different stages in the software development process where testing is conducted, and they can follow one another sequentially or overlap in some cases. Test types are specific categories of tests that assess different aspects of software quality, and they can be applied at various test levels to ensure the software meets the required quality standards.

2.2.1. Test Levels

Software testing includes various levels to ensure the quality of the software. These levels are defined by specific characteristics, objectives, and the entities being tested. Here’s a simplified summary of the different testing levels and their key attributes:

  • Component Testing (Unit Testing):
  • Focus: Testing individual components (small parts) of the software in isolation.
  • Performed by: Developers in their development environments.
  • Tools: Often uses test harnesses or unit test frameworks.
  • Component Integration Testing (Unit Integration Testing):
  • Focus: Testing the interactions and interfaces between components.
  • Dependent on: Integration strategy (e.g., bottom-up, top-down, or big-bang).
  • System Testing:
  • Focus: Evaluating the overall behavior and capabilities of the entire system or product.
  • Includes: Functional testing of end-to-end tasks and non-functional testing of quality characteristics (e.g., usability).
  • Performed by: This may involve an independent test team.
  • System Integration Testing:
  • Focus: Testing the interfaces between the system under test and other systems or external services.
  • Requires: Suitable test environments similar to the operational environment.
  • Acceptance Testing:
  • Focus: Validating the readiness of the system for deployment and ensuring it fulfills the user’s business needs.
  • Main Forms: User acceptance testing (UAT), operational acceptance testing, contractual and regulatory acceptance testing, alpha testing, and beta testing.
  • Ideally Performed by: Intended users.

Attributes used to distinguish these test levels and prevent overlapping of activities include:

  • Test Object: What is being tested (components, interfaces, the entire system, etc.)
  • Test Objectives: The specific goals and purposes of the testing level.
  • Test Basis: The information and documents that testing is based on.
  • Defects and Failures: How issues and failures are handled and reported.
  • Approach and Responsibilities: The methods and individuals or teams responsible for conducting the testing.

In essence, these test levels help ensure that different aspects of the software are examined thoroughly, from small components to the complete system, with specific goals and methods associated with each level.
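Component testing's key idea — testing a small part of the software in isolation — can be sketched as follows. The `CurrencyConverter` class and its stubbed rate provider are hypothetical names invented for illustration; the point is that the component's real dependency is replaced by a test double, so the test needs no network, database, or other components.

```python
# Component (unit) test sketch: the component under test is isolated
# from its real dependency by injecting a stub.

class CurrencyConverter:
    """The component under test: converts amounts using a rate provider."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def convert(self, amount: float, from_ccy: str, to_ccy: str) -> float:
        return amount * self.rate_provider(from_ccy, to_ccy)

def stub_rate(from_ccy: str, to_ccy: str) -> float:
    # Test double: a fixed rate, so the test is fast and deterministic.
    return 2.0

def test_convert_applies_rate():
    converter = CurrencyConverter(stub_rate)
    assert converter.convert(10, "USD", "EUR") == 20.0
```

At component integration testing, by contrast, the stub would be replaced by the real rate provider so that the interface between the two components is exercised.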

2.2.2. Test Types

Various types of tests can be used in software projects. In this context, four key test types are discussed:

  • Functional Testing:
  • Focus: Evaluating what the software component or system should do, and assessing its functions.
  • Objective: Checking functional completeness, correctness, and appropriateness.
  • Example: Verifying that a login function allows users to access their accounts.
  • Non-Functional Testing:
  • Focus: Assessing attributes beyond the functionality, such as performance, compatibility, usability, security, and more.
  • Objective: Evaluating non-functional software quality characteristics.
  • Examples: Checking how fast a webpage loads (performance) or how user-friendly the user interface is (usability).
  • Black-Box Testing:
  • Approach: Specification-based, deriving tests from external documentation.
  • Objective: Verifying the system’s behavior against its specifications.
  • Example: Testing a software application based on the requirements and features described in a document.
  • White-Box Testing:
  • Approach: Structure-based, deriving tests from the software’s internal structure, such as code, architecture, and data flows.
  • Objective: Ensuring that the underlying system structure is adequately covered by tests.
  • Example: Testing individual code segments or data flow paths within the software.
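The black-box/white-box distinction can be shown on a single small function. The shipping-fee rules below are a made-up specification for illustration: the black-box cases are derived only from that specification (boundary values), while the white-box observation is about which branches of the implementation those cases execute.

```python
def shipping_fee(weight_kg: float) -> float:
    # Hypothetical spec: free under 1 kg, flat 5.0 up to 10 kg, else 10.0.
    if weight_kg < 1:
        return 0.0
    elif weight_kg <= 10:
        return 5.0
    return 10.0

# Black-box testing: cases derived from the specification's boundaries,
# with no reference to the code's structure.
assert shipping_fee(0.5) == 0.0
assert shipping_fee(1.0) == 5.0
assert shipping_fee(10.0) == 5.0
assert shipping_fee(10.1) == 10.0

# White-box view: together, these four inputs also execute every branch
# of the implementation, achieving full branch coverage of this function.
```

The same tests can thus be assessed from both perspectives: do they cover the specification (black-box), and do they cover the structure (white-box)?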

It’s important to note that all four of these test types can be applied to various test levels within the software development process. However, the specific focus and techniques used for each test type may differ at each level. For instance, while functional testing aims to ensure the software performs its intended functions, non-functional testing is concerned with aspects like performance and security. Starting non-functional testing early in the project is advisable to avoid late-stage issues that could jeopardize project success. Depending on the test type, specific test environments may be required, such as a usability lab for usability testing.

In summary, these four test types provide different approaches to assess the quality and behavior of software, considering both functional and non-functional aspects. They can be applied at various stages of development, and different techniques are used to derive test conditions and test cases for each test type.

2.2.3. Confirmation Testing and Regression Testing

When changes are made to a software component or system, they are typically done to add new features or fix defects. In such cases, testing plays a crucial role, which involves two important aspects: confirmation testing and regression testing.

  • Confirmation Testing:
  • Purpose: To confirm that a specific defect has been successfully fixed.
  • Approach: Depending on the risk involved, you can test the fixed version of the software in a few ways:
  • Re-running all test cases that previously failed due to the defect.
  • Adding new tests to cover any changes made to fix the defect.
  • Efficiency Consideration: When time or resources are limited, confirmation testing might be limited to verifying that the steps that originally caused the defect now work without causing the failure.
  • Regression Testing:
  • Purpose: To ensure that making changes, including defect fixes, has not introduced new defects or adverse consequences.
  • Scope: This testing can extend beyond the test object itself, potentially affecting other components in the same system or even connected systems.
  • Impact Analysis: Before conducting regression testing, it’s advisable to perform an impact analysis. This analysis helps identify the parts of the software that could potentially be affected by the changes.
  • Automation: Regression testing is often automated because it is run frequently, especially in iterative development or DevOps environments. It’s recommended to start automation early in the project, and it’s often used in Continuous Integration (CI) practices to maintain software quality.
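The two activities above can be sketched side by side. The `apply_discount` function and its defect (invalid percentages once produced nonsense prices) are invented for illustration: the confirmation test re-runs the scenario that originally failed, while the regression test guards behavior that the fix was not supposed to change.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    # The defect fix: invalid percentages are now rejected instead of
    # silently producing a negative price.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_confirmation_defect_is_fixed():
    # Confirmation testing: re-run the case that exposed the defect.
    with pytest.raises(ValueError):
        apply_discount(50.0, 150.0)

def test_regression_normal_discount_unchanged():
    # Regression testing: unchanged behavior still works after the fix.
    assert apply_discount(100.0, 25.0) == 75.0
```

In a CI pipeline, the regression suite would run automatically on every change, while the confirmation test is added when the defect is fixed and then kept as part of that suite.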

In summary, confirmation testing ensures that a specific defect is fixed, and it involves retesting the affected part of the software. Regression testing goes a step further to check if the changes, including defect fixes, have caused any unintended issues or adverse effects in the software, and it may span various levels and even the software’s environment. Automated regression testing is a common practice, especially in agile and DevOps settings. Both confirmation and regression testing are essential on all test levels if defects are fixed or changes are made at those levels.

2.3. Maintenance Testing

Software maintenance involves various activities to keep a software system in good shape. These activities fall into different categories, including corrective maintenance, adaptive maintenance to handle environmental changes, and improvements for performance and maintainability. Maintenance can involve planned releases and unplanned releases, often referred to as hotfixes. Impact analysis is a valuable step that can help assess the potential consequences of changes before they are made. Testing plays a crucial role in maintenance, including evaluating the success of change implementation and checking for regressions in unchanged parts of the system.

Key points about maintenance testing:

  • Scope of Maintenance Testing: The scope of maintenance testing depends on factors like the level of risk associated with the change, the size of the existing system, and the magnitude of the change.
  • Triggers for Maintenance Testing:
  • Modifications: These include planned enhancements, corrective changes, and hotfixes.
  • Upgrades or Migrations: When the operational environment changes, like moving to a new platform, testing is required for both the new environment and the changed software.
  • Retirement: When a system reaches the end of its life, testing may be needed for data archiving and for the procedures to restore and retrieve archived data if necessary.

In summary, software maintenance encompasses various activities, including making changes to the software for different reasons, adapting to environmental changes, and ensuring the system's performance and maintainability. Impact analysis helps assess potential consequences before making changes. Maintenance testing plays a crucial role in ensuring the success of these changes and preventing unintended issues, and its scope depends on factors like risk and the size of the system. Maintenance testing is triggered by modifications, upgrades, migrations, and the retirement of software systems.
