The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
Artificial intelligence (AI) has revolutionized the realm of software testing, introducing new possibilities and efficiencies. The demand for faster, more reliable, and efficient testing processes has grown exponentially with the increasing complexity of modern applications. To address these challenges, AI has emerged as a game-changing force in automated software testing. By leveraging AI algorithms, machine learning (ML), and advanced analytics, software testing has undergone a remarkable transformation, enabling organizations to achieve unprecedented levels of speed, accuracy, and coverage in their testing endeavors. This article delves into the profound impact of AI on automated software testing, exploring its capabilities, benefits, and the potential it holds for the future of software quality assurance.

An Overview of AI in Testing

This introduction aims to shed light on the role of AI in software testing, focusing on key aspects that drive its transformative impact.

Figure 1: AI in testing

Elastically Scale Functional, Load, and Performance Tests

AI-powered testing solutions enable the effortless allocation of testing resources, ensuring optimal utilization and adaptability to varying workloads. This scalability ensures comprehensive testing coverage while maintaining efficiency.

AI-Powered Predictive Bots

AI-powered predictive bots are a significant advancement in software testing. Bots leverage ML algorithms to analyze historical data, patterns, and trends, enabling them to make informed predictions about potential defects or high-risk areas. By proactively identifying potential issues, predictive bots contribute to more effective and efficient testing processes.

Automatic Update of Test Cases

With AI algorithms monitoring the application and its changes, test cases can be dynamically updated to reflect modifications in the software. This adaptability reduces the effort required for test maintenance and ensures that the test suite remains relevant and effective over time.

AI-Powered Analytics of Test Automation Data

By analyzing vast amounts of testing data, AI-powered analytical tools can identify patterns, trends, and anomalies, providing valuable information to enhance testing strategies and optimize testing efforts. This data-driven approach empowers testing teams to make informed decisions and uncover hidden patterns that traditional methods might overlook.

Visual Locators

Visual locators, a type of AI application in software testing, focus on visual elements such as user interfaces and graphical components. AI algorithms can analyze screenshots and images, enabling accurate identification of and interaction with visual elements during automated testing. This capability enhances the reliability and accuracy of visual testing, ensuring a seamless user experience.

Self-Healing Tests

AI algorithms continuously monitor test execution, analyzing results and detecting failures or inconsistencies. When issues arise, self-healing mechanisms automatically attempt to resolve the problem, adjusting the test environment or configuration. This intelligent resilience minimizes disruptions and optimizes the overall testing process.

What Is AI-Augmented Software Testing?

AI-augmented software testing refers to the utilization of AI techniques — such as ML, natural language processing, and data analytics — to enhance and optimize the entire software testing lifecycle.
It involves automating test case generation, intelligent test prioritization, anomaly detection, predictive analysis, and adaptive testing, among other tasks. By harnessing the power of AI, organizations can improve test coverage, detect defects more efficiently, reduce manual effort, and ultimately deliver high-quality software with greater speed and accuracy.

Benefits of AI-Powered Automated Testing

AI-powered software testing offers a plethora of benefits that revolutionize the testing landscape. One significant advantage lies in its codeless nature, eliminating the need to memorize intricate syntax. Embracing simplicity, it empowers users to effortlessly create testing processes through intuitive drag-and-drop interfaces. Scalability becomes a reality as the workload can be distributed among multiple workstations, ensuring efficient utilization of resources. The cost-saving aspect is remarkable as minimal human intervention is required, resulting in substantial reductions in workforce expenses. With tasks executed by intelligent bots, accuracy reaches unprecedented heights, minimizing the risk of human errors. Furthermore, this automated approach amplifies productivity, enabling testers to achieve exceptional output levels. Irrespective of the software type — be it a web-based, desktop, or mobile application — the flexibility of AI-powered testing seamlessly adapts to diverse environments, revolutionizing the testing realm altogether.

Figure 2: Benefits of AI for test automation

Mitigating the Challenges of AI-Powered Automated Testing

AI-powered automated testing has revolutionized the software testing landscape, but it is not without its challenges. One of the primary hurdles is the need for high-quality training data. AI algorithms rely heavily on diverse and representative data to perform effectively. Therefore, organizations must invest time and effort in curating comprehensive and relevant datasets that encompass various scenarios, edge cases, and potential failures. Another challenge lies in the interpretability of AI models. Understanding why and how AI algorithms make specific decisions can be critical for gaining trust and ensuring accurate results. Addressing this challenge requires implementing techniques such as explainable AI, model auditing, and transparency. Furthermore, the dynamic nature of software environments poses a challenge in maintaining AI models' relevance and accuracy. Continuous monitoring, retraining, and adaptation of AI models become crucial to keeping pace with evolving software systems. Additionally, ethical considerations, data privacy, and bias mitigation should be diligently addressed to maintain fairness and accountability in AI-powered automated testing.

AI models used in testing can sometimes produce false positives (incorrectly flagging a non-defect as a defect) or false negatives (failing to identify an actual defect). Balancing the precision and recall of AI models is important to minimize false results. AI models can also exhibit biases and may struggle to generalize to new or uncommon scenarios. Adequate training and validation of AI models are necessary to mitigate biases and ensure their effectiveness across diverse testing scenarios. Human testers play a critical role in designing test suites by leveraging their domain knowledge and insights.
They can identify critical test cases, edge cases, and scenarios that require human intuition or creativity, while leveraging AI to handle repetitive or computationally intensive tasks. Continuous improvement would be possible by encouraging a feedback loop between human testers and AI systems. Human experts can provide feedback on the accuracy and relevance of AI-generated test cases or predictions, helping improve the performance and adaptability of AI models. Human testers should play a role in the verification and validation of AI models, ensuring that they align with the intended objectives and requirements. They can evaluate the effectiveness, robustness, and limitations of AI models in specific testing contexts. AI-Driven Testing Approaches AI-driven testing approaches have ushered in a new era in software quality assurance, revolutionizing traditional testing methodologies. By harnessing the power of artificial intelligence, these innovative approaches optimize and enhance various aspects of testing, including test coverage, efficiency, accuracy, and adaptability. This section explores the key AI-driven testing approaches, including differential testing, visual testing, declarative testing, and self-healing automation. These techniques leverage AI algorithms and advanced analytics to elevate the effectiveness and efficiency of software testing, ensuring higher-quality applications that meet the demands of the rapidly evolving digital landscape: Differential testing assesses discrepancies between application versions and builds, categorizes the variances, and utilizes feedback to enhance the classification process through continuous learning. Visual testing utilizes image-based learning and screen comparisons to assess the visual aspects and user experience of an application, thereby ensuring the integrity of its look and feel. Declarative testing expresses the intention of a test using a natural or domain-specific language, allowing the system to autonomously determine the most appropriate approach to execute the test. Self-healing automation automatically rectifies element selection in tests when there are modifications to the user interface (UI), ensuring the continuity of reliable test execution. Key Considerations for Harnessing AI for Software Testing Many contemporary test automation tools infused with AI provide support for open-source test automation frameworks such as Selenium and Appium. AI-powered automated software testing encompasses essential features such as auto-code generation and the integration of exploratory testing techniques. Open-Source AI Tools To Test Software When selecting an open-source testing tool, it is essential to consider several factors. Firstly, it is crucial to verify that the tool is actively maintained and supported. Additionally, it is critical to assess whether the tool aligns with the skill set of the team. Furthermore, it is important to evaluate the features, benefits, and challenges presented by the tool to ensure they are in line with your specific testing requirements and organizational objectives. 
A few popular open-source options include, but are not limited to:

Carina – AI-driven, free forever, scriptless approach to automate functional, performance, visual, and compatibility tests
TestProject – Offered the industry's first free Appium AI tools in 2021, expanding upon the AI tools for Selenium that they had previously introduced in 2020 for self-healing technology
Cerberus Testing – A low-code and scalable test automation solution that offers a self-healing feature called Erratum and has a forever-free plan

Designing Automated Tests With AI and Self-Testing

AI has made significant strides in transforming the landscape of automated testing, offering a range of techniques and applications that revolutionize software quality assurance. Some of the prominent techniques and algorithms are provided in the tables below, along with the purposes they serve:

Table 1: Key techniques and applications of AI in automated testing (key technique – applications)

Machine learning – Analyze large volumes of testing data, identify patterns, and make predictions for test optimization, anomaly detection, and test case generation
Natural language processing – Facilitate the creation of intelligent chatbots, voice-based testing interfaces, and natural language test case generation
Computer vision – Analyze image and visual data in areas such as visual testing, UI testing, and defect detection
Reinforcement learning – Optimize test execution strategies, generate adaptive test scripts, and dynamically adjust test scenarios based on feedback from the system under test

Table 2: Key algorithms used for AI-powered automated testing (algorithm – purpose – applications)

Clustering algorithms – Segmentation – k-means and hierarchical clustering are used to group similar test cases, identify patterns, and detect anomalies
Sequence generation models (recurrent neural networks or transformers) – Text classification and sequence prediction – Trained to generate sequences such as test scripts or sequences of user interactions for log analysis
Bayesian networks – Dependencies and relationships between variables – Test coverage analysis, defect prediction, and risk assessment
Convolutional neural networks – Image analysis – Visual testing
Evolutionary algorithms (genetic algorithms) – Natural selection – Optimize test case generation, test suite prioritization, and test execution strategies by applying genetic operators like mutation and crossover on existing test cases to create new variants, which are then evaluated based on fitness criteria
Decision trees, random forests, support vector machines, and neural networks – Classification – Classification of software components
Variational autoencoders and generative adversarial networks – Generative AI – Used to generate new test cases that cover different scenarios or edge cases via test data generation, creating synthetic data that resembles real-world scenarios

Real-World Examples of AI-Powered Automated Testing

AI-powered visual testing platforms perform automated visual validation of web and mobile applications. They use computer vision algorithms to compare screenshots and identify visual discrepancies, enabling efficient visual testing across multiple platforms and devices. NLP and ML are combined to generate test cases from plain English descriptions. They automatically execute these test cases, detect bugs, and provide actionable insights to improve software quality.
Self-healing capabilities are also provided by automatically adapting test cases to changes in the application's UI, improving test maintenance efficiency.

Quantum AI-Powered Automated Testing: The Road Ahead

The future of quantum AI-powered automated software testing holds great potential for transforming the way testing is conducted.

Figure 3: Transition of automated testing from AI to Quantum AI

Quantum computing's ability to handle complex optimization problems can significantly improve test case generation, test suite optimization, and resource allocation in automated testing. Quantum ML algorithms can enable more sophisticated and accurate models for anomaly detection, regression testing, and predictive analytics. Quantum computing's ability to perform parallel computations can greatly accelerate the execution of complex test scenarios and large-scale test suites. Quantum algorithms can help enhance security testing by efficiently simulating and analyzing cryptographic algorithms and protocols. Quantum simulation capabilities can be leveraged to model and simulate complex systems, enabling more realistic and comprehensive testing of software applications in various domains, such as finance, healthcare, and transportation.

Parting Thoughts

AI has significantly revolutionized the traditional landscape of testing, enhancing the effectiveness, efficiency, and reliability of software quality assurance processes. AI-driven techniques such as ML, anomaly detection, NLP, and intelligent test prioritization have enabled organizations to achieve higher test coverage, early defect detection, streamlined test script creation, and adaptive test maintenance. The integration of AI in automated testing not only accelerates the testing process but also improves overall software quality, leading to enhanced customer satisfaction and reduced time to market. As AI continues to evolve and mature, it holds immense potential for further advancements in automated testing, paving the way for a future where AI-driven approaches become the norm in ensuring the delivery of robust, high-quality software applications. Embracing the power of AI in automated testing is not only a strategic imperative but also a competitive advantage for organizations looking to thrive in today's rapidly evolving technological landscape.
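As a closing illustration of the clustering entry in Table 2, here is a minimal sketch that groups test cases by their execution behavior with k-means; the chosen features, the data, and the cluster count are hypothetical, and the sketch assumes scikit-learn and NumPy are available.

# Minimal sketch: cluster test cases by execution behavior to surface groups of
# similar (or anomalous) tests. All metrics and names are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per test case: [average duration in seconds, failure rate, reruns]
test_metrics = np.array([
    [0.2, 0.00, 0],
    [0.3, 0.01, 1],
    [12.5, 0.20, 4],
    [11.8, 0.25, 5],
    [0.4, 0.00, 0],
    [6.1, 0.05, 2],
])
test_names = ["test_login", "test_search", "test_checkout_e2e",
              "test_payment_e2e", "test_logout", "test_report_export"]

# Scale features so duration does not dominate the distance calculation
scaled = StandardScaler().fit_transform(test_metrics)

# Group the tests into three clusters (e.g., fast/stable, slow/unstable, in-between)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for name, label in zip(test_names, labels):
    print(f"cluster {label}: {name}")

A team might review the slow, unstable cluster first when deciding which tests to stabilize or which areas of the application deserve extra attention.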
In the fast-paced world of software development, projects need agility to respond quickly to market changes, which is only possible when organizations and project management improve efficiency, reduce waste, and deliver value to their customers as quickly as possible. A methodology that has become very popular in this digital era is the Agile methodology. Agile strives to reduce effort, yet it delivers high-quality features and value in each build. Within the Agile spectrum, there exists a concept known as "Pure Agile Methodology," often referred to simply as "Pure Agile," which is a refined and uncompromising approach to Agile project management. It adheres strictly to the core values of the Agile Manifesto: favoring individuals and interactions over processes and tools, working solutions over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. Though Agile is used worldwide for most software projects, the way it is implemented is not always pure Agile. When the implementation is seamless and faithful to these values, we can recognize Pure Agile, which is why it is also known as "Agile in its truest form."

Within the Agile framework, Agile testing plays a pivotal role in ensuring that software products are not only developed faster but also meet high-quality standards. Agile testing is a new-age approach to software testing designed to keep pace with the agile software development process. It is an iterative and incremental approach that applies the principles of agile software development to the practice of testing. It goes beyond traditional testing methods, becoming a collaborative and continuous effort throughout the project lifecycle. Agile testing is a collaborative, team-oriented process. Unlike traditional software testing, Agile testing tests systems in small increments, often developing tests before writing the code or feature. Below are the ways it differs from traditional testing:

Early involvement: Agile testing applies a "test-first" approach. Testers are involved in the project from the beginning, i.e., requirements discussions, user story creation, and sprint planning. This ensures that testing considerations are taken into account from the outset.
Integration: In Agile testing, testing activities are performed simultaneously with development rather than in a separate testing phase. The biggest advantage is that defects are detected and addressed at an early stage, which helps to reduce cost, time, and effort.
User-centric: Agile testing gives the highest priority to customer feedback, and the testing effort is aligned with the feedback given by the customer.
Feedback-driven: Agile testing relies on continuous feedback. This ongoing feedback and communication ensure that everyone is aligned on project goals and quality standards.
TDD: Test-driven development is common practice in Agile, where tests are prepared before the code is written to ensure that the code meets the acceptance criteria. This promotes a "test-first" mindset among developers.
Regression testing: As the product evolves with each iteration, regression testing becomes critical. New functionality or feature changes shouldn't introduce regressions that break existing functionality.
Minimal documentation: Agile testing often relies on lightweight documentation, focusing more on working software than extensive test plans and reports. Test cases may be captured as code or in simple, accessible formats.
Collaboration: Agile teams are cross-functional, with all the groups of people and skills needed to deliver value across traditional organizational silos, largely eliminating handoffs and delays.

The term "Agile testing quadrants" refers to a concept introduced by Brian Marick, a software testing expert, Extreme Programming (XP) proponent, and Agile Manifesto co-author who helped pioneer agile testing. It helps teams and testers think systematically about the different types of testing they need to perform within an Agile development environment. At scale, many types of tests are required to ensure quality: tests for code, interfaces, security, stories, larger workflows, etc. The quadrants describe a matrix, with four quadrants defined across two axes, that guides the reasoning behind these tests.

Agile Testing: Quadrants

Q1 – Contains unit and component tests. These tests use test-driven development (TDD); a minimal sketch of this test-first style follows the quadrant list.
Q2 – Feature-level and capability-level acceptance tests confirm the aggregate behavior of user stories. The team automates these tests using BDD techniques.
Q3 – Contains exploratory tests, user acceptance tests, scenario-based tests, and final usability tests. These tests are often manual.
Q4 – Verifies that the system meets its non-functional requirements (NFRs), such as load and performance testing.
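As a minimal illustration of the Q1, test-first style just described, the sketch below pairs unit tests with a small, hypothetical discount-calculation function; the function name, rules, and values are invented for this example and are not taken from any particular project.

# Q1-style sketch (pytest conventions): in strict TDD the tests below would be
# written first and fail until calculate_discount is implemented. Both are shown
# together so the example runs as-is. All names and rules are hypothetical.

def calculate_discount(order_total: float, is_member: bool) -> float:
    """Return the discount amount: members get 10% off orders over 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0

def test_member_gets_discount_over_threshold():
    assert calculate_discount(order_total=150.0, is_member=True) == 15.0

def test_no_discount_for_non_members():
    assert calculate_discount(order_total=150.0, is_member=False) == 0.0

def test_no_discount_under_threshold():
    assert calculate_discount(order_total=80.0, is_member=True) == 0.0

Running a file like this with pytest exercises the Q1 feedback loop: the tests document the expected behavior and fail immediately if the rule changes.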
Verifying code changes with unit tests is a critical process in typical development workflows. GitHub Actions provides a number of custom actions to collect and process the results of tests, allowing developers to browse the results, debug failed tests, and generate reports. In this article, I show you how to add unit tests to a GitHub Actions workflow and configure custom actions to process the results.

Getting Started

GitHub Actions is a hosted service, so all you need to get started is a GitHub account. All other dependencies, like Software Development Kits (SDKs), are installed during the execution of the GitHub Actions workflow.

Selecting an Action

GitHub Actions relies heavily on third-party actions contributed by the community. A quick Google search shows at least half a dozen actions for processing unit test results, including:

action-junit-report
publish-unit-test-results
junit-report-action
test-reporter
report-junit-annotations-as-github-actions-annotations

To narrow the selection, you need to consider the following functionality:

Does the action support your testing framework? For example, some actions only process JUnit test results, while others include additional formats like TRX.
Does the action allow you to fail the workflow based on the presence of failed tests?
Does the action annotate the source code with details of test results?
Does the action generate a useful report?
How many stars does the project have?

After some trial and error, I settled on the test-reporter action, which is demonstrated in this post.

Unit Testing in Java

The workflow file shown below runs tests with Maven and processes the results with the test-reporter action:

name: Java
on:
  push:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Set up JDK 1.11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Build
        run: mvn --batch-mode -DskipTests package
      - name: Test
        run: mvn --batch-mode -Dmaven.test.failure.ignore=true test
      - name: Report
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: Maven Tests
          path: target/surefire-reports/*.xml
          reporter: java-junit
          fail-on-error: true

The Build, Test, and Report steps are important to the testing process. You start by building the application, but skipping the tests:

      - name: Build
        run: mvn --batch-mode -DskipTests package

Next, you run the tests, allowing the command to pass even if there are failing tests. This allows you to defer the response to failed tests to the test processing action:

      - name: Test
        run: mvn --batch-mode -Dmaven.test.failure.ignore=true test

In the final step, you generate a report from the JUnit XML file. The if property is set to always run this step, allowing you to generate the report even if the Test step above was set to fail in the event of failed tests. The fail-on-error property is set to true to fail this workflow if there were failed tests.
This is an example of deferring the response to failed tests to the test processing action:

      - name: Report
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: Maven Tests
          path: target/surefire-reports/*.xml
          reporter: java-junit
          fail-on-error: true

The test results are displayed as a link under the original workflow results. Failing tests show additional details such as the name of the test, the test result, and the raw test output.

Unit Testing in DotNET

The workflow file shown below runs tests with the DotNET Core CLI and processes the results with the test-reporter action:

name: .NET Core
on:
  push:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.402
      - name: Build
        run: dotnet build --configuration Release
      - name: Test
        run: dotnet test --logger "trx;LogFileName=test-results.trx" || true
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: DotNET Tests
          path: "**/test-results.trx"
          reporter: dotnet-trx
          fail-on-error: true

The tests are executed by the DotNET Core CLI, saving the results as a Visual Studio Test Results (TRX) report file. The test command returns a non-zero exit code if any tests fail, but you defer responsibility for responding to failed tests to the test processor. By chaining || true to the command, you ensure the step always passes:

      - name: Test
        run: dotnet test --logger "trx;LogFileName=test-results.trx" || true

The test-reporter action then processes the report file, and sets fail-on-error to true to fail the build if there are any failed tests:

      - name: Test Report
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: DotNET Tests
          path: "**/test-results.trx"
          reporter: dotnet-trx
          fail-on-error: true

Conclusion

GitHub Actions is primarily a task execution environment designed to verify and build code and publish the resulting artifacts. There are a number of third-party actions that allow you to generate test reports and respond to failed tests, but GitHub Actions has some gaps in terms of tracking test results over time. Still, the reporting functionality available today is useful and will only improve.

In this post, you learned:

Some of the questions to ask when evaluating third-party actions to process test results
How to write basic workflows for testing Java and DotNET Core applications
How to process test results and display the generated reports

Happy deployments!
Debugging is an integral part of software development. However, as projects grow in size and complexity, the process of debugging requires more structure and collaboration. This process is probably something you already do, as this process is deeply ingrained into most teams. It's also a core part of the academic theory behind debugging. Its purpose is to prevent regressions and increase collaboration in a team environment. Without this process, any issue we fix might come back to haunt us in the future. This process helps developers work cohesively and efficiently. The Importance of Issue Tracking I'm sure we all use an issue tracker. In that sense, we should all be aligned. But do you sometimes "just fix a bug"? Without going through the issue tracker? Honestly, I do that a lot. Mostly in hobby projects but occasionally even in professional settings. Even when working alone, this can become a problem... Avoiding Parallel Work on the Same Bug When working on larger projects, it's crucial to avoid situations where multiple developers are unknowingly addressing the same issue. This can lead to wasted effort and potential conflicts in the codebase. To prevent this: Always log bugs in your issue-tracking system. Before starting work on a bug, ensure it's assigned to you and marked as active. This visibility allows the project manager and other team members to be aware, reducing the chances of overlapping work. Stay updated on other issues. By keeping an eye on the issues your teammates are tackling, you can anticipate potential areas of conflict and adjust your approach accordingly. Assuming you have a daily sync session or even a weekly session, it's important to discuss issues. This prevents collision, where a teammate can hear the description of the bug and might raise a flag. This also helps in pinpointing the root cause of the bug in some situations. An issue might be familiar, and communicating through it leaves a "paper trail." As the project grows, you will find that bugs keep coming back despite everything we do. History that was left behind in the issue tracker by teammates who are no longer on the team can be a lifesaver. Furthermore, the statistics we can derive from a properly classified issue tracker can help us pinpoint the problematic areas of the code that might need further testing and maybe refactoring. The Value of Issue Over Pull Requests We sometimes write the comments and information directly into the pull request instead of the issue tracker. This can work for some situations but isn't as ideal for the general case. Issues in a tracking system are often more accessible than pull requests or specific commits. When addressing a regression, linking the pull request to the originating issue is vital. This ensures that all discussions and decisions related to the bug are centralized and easily traceable. Communication: Issue Tracker vs. Ephemeral Channels I use Slack a lot. This is a problem; it's convenient, but it's ephemeral, and in more than one case, important information written in a Slack chat was gone. Emails aren't much of an improvement, especially in the long term. An email thread I had with a former colleague was cut short, and I had no context as to where it ended. Yes, having a conversation in the issue tracker is cumbersome and awkward, but we have a record. 
Why We Sometimes Avoid the Issue Tracker Developers might sometimes avoid discussing issues in the tracker because: Complex discussions: Some topics might feel too broad or intricate for the issue tracker. Fear of public criticism: No one wants to appear ignorant or criticize a colleague in a permanent record. As a result, some discussions might shift to private or ephemeral channels. However, while team cohesion and empathy are crucial, it's essential to log all relevant discussions in the issue tracker. This ensures that knowledge isn't lost, especially if a team member departs. The Role of Daily Meetings Daily meetings are invaluable for teams with multiple developers working on related tasks. These meetings provide a platform for: Sharing updates: Inform the team about your current theories and direction. Engaging in discussions: If a colleague's update sounds familiar, it's an opportunity to collaborate and avoid redundant work. However, it's essential to keep these meetings concise. Detailed discussions should transition to the issue tracker for a comprehensive record. I prefer two weekly meetings as I find it's the optimal number. The first day of the week is usually a ramp-up day. Then we have the first meeting in the morning of the second day of the week and the second meeting two days later. That reduces the load of a daily meeting while still keeping information fresh. The Role of Testing in Debugging We all use tests when developing (hopefully), but debugging theory has a special place for tests. Starting With Unit Tests A common approach to debugging is to begin by creating a unit test that reproduces the issue. However, this might not always be feasible before understanding the problem. Nevertheless, once the problem is understood, we should: Create a test before fixing the issue. This test should be part of the pull request that addresses the bug. Maintain a coverage ratio. Aim for a coverage ratio of 60% or higher per pull request to ensure that changes are adequately tested. A test acts as a safeguard against a regression. If the bug resurfaces, it will be a slightly different variant of that same bug. Unit Tests vs. Integration Tests While unit tests are fast and provide immediate feedback, they primarily prevent regressions. They might not be as effective in verifying overall quality. On the other hand, integration tests, though potentially slower, offer a comprehensive quality check. They can sometimes be the only way to reproduce certain issues. Most of the difficult bugs I ran into in my career were in the interconnect area between modules. This is an area that unit tests don't cover very well. That is why integration tests are far more important than unit tests for overall application quality. To ensure quality, focus on integration tests for coverage. Relying solely on unit test coverage can be misleading. It might lead to dead code and added complexity in the system. However, as part of the debugging process, it's very valuable to have a unit test as it's far easier to debug and much faster. Final Word A structured approach to debugging, combined with effective communication and a robust testing strategy, can significantly enhance the efficiency and quality of software development. This isn't about convenience; the process underlying debugging is like a paper trail for the debugging process. I start every debugging session by searching the issue tracker. 
In many cases, it yields gold that might not lead me to the issue directly but still points me in the right direction. The ability to rely on a unit test that was committed when solving a similar bug is invaluable. It gives me a leg up on resolving similar issues moving forward.
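As a concrete illustration of the kind of regression test discussed above, committed alongside a fix and named after the tracked issue, here is a minimal pytest-style sketch; the issue number, function, and bug are hypothetical and only stand in for whatever your tracker and codebase actually contain.

# Regression test committed together with the fix for a (hypothetical) issue.
# Naming the test after the tracker entry preserves the paper trail between
# the issue, the pull request, and the test suite.

def parse_port(value: str) -> int:
    """Parse a configured port; the original bug crashed on padded values."""
    return int(value.strip())

def test_issue_1234_port_parsing_ignores_whitespace():
    # Reproduces the report in (hypothetical) issue #1234: config values
    # padded with spaces raised ValueError before the fix.
    assert parse_port(" 8080 ") == 8080

If a variant of the bug resurfaces later, the failing test points straight back to the original issue and the reasoning recorded there.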
Sometimes, test teams may be perplexed about how to switch to agile. If you work on such a team, you almost certainly have manual regression tests, either because you’ve never had the time to automate them or because you test from the UI and it does not make logical sense to automate them. You most likely have excellent exploratory testers who can uncover defects within complicated systems, but they do not automate their testing and require a finished product before they begin testing. You understand how to schedule testing for a release, but everything must be completed within a two-, three-, or four-week iteration. How do you pull it off? How do you keep up with technological advancements? This is a persistent challenge. In many businesses, developers believe they have moved to agile, but testers remain buried in manual testing and cannot “stay current” after each iteration. When I communicate to these professionals that they are only experiencing a portion of the benefits of their agile transformation, developers and testers say that the testers are too sluggish.

Done Means DONE!

The challenge is not that the testers are too slow but that the team cannot own “done,” and until the team owns “done” and contributes to accomplishing it, the testers will look too sluggish. In every iteration, agile teams can deliver a functional product. They are not obligated to release, but the software is expected to be of sufficient quality. That indicates the testing, which is about risk management, is over. After all, how can you release if you don’t know the risks? Testing provides knowledge about the product being tested. The tests do not establish that the product is perfect or that the engineers are excellent or bad, but instead that the product either does or does not accomplish what we expected it to accomplish. This implies that the testing must be consistent with the product. If the product features a graphical user interface, the testing will have to use it at some point. However, there are several strategies for testing inside a system. One approach to testing from within the GUI is to develop the tests as you go, so you don’t have to test from beginning to end and still get relevant information on the product under test. If programmers just test at the unit level, they have no idea if a component is complete. If the testers cannot complete the testing from a system-level perspective, they do not know if a feature is functional. How, then, can you consider an iteration done if no one knows whether a functionality is ready? You simply cannot. That is why having a collaborative definition of done is crucial. Is a story complete once the developers have tested it? Is a story complete once it has been integrated and built into an executable by the developers? What about the setup? How much testing does a feature require to determine whether or not it is complete? There is no unique correct solution for every team. So, every team must evaluate its product, consumers, and challenges and reach a conclusion: “OK, we can say it’s done if all of the code has been checked in, reviewed by someone, or written in pairs; all of the developer tests have been completed; and all of the system tests for this feature have been created and run under the GUI. Every few days, we’ll handle GUI-based checking, but we won’t test using the GUI.” I’m not sure whether that’s an acceptable definition of done for your business. It would help to consider the consequences of not doing periodic GUI testing on your product.
Perhaps you don’t have a graphical user interface for your product, but you do have a database. Do the developer tests require database access? Perhaps, perhaps not. Does the testing process require access permission? I’d assume so, but maybe you have a product I’m not familiar with, and maybe the tests don’t really need it all the time. Perhaps additional automated tests that exercise database updates or migrations are required before anything else. “Done” is determined by your product and its risks. Consider the consequences of launching a product without various types of testing, and then you’ll understand what you require in an iteration to achieve a release-ready product.

Give a Mouse a Cookie

Once you’ve determined what you anticipate in an iteration, you’ll most likely require testing. Then, you’ll run into the “Give a Mouse a Cookie” scenario. In a charming child’s book that bears the same name, if you give a mouse a cookie, he wants a glass of milk to go with it. Then, he’ll need a rag to wipe the milk from his lips and a broom to sweep the crumbs off the ground. The need for more and more continues until the mouse becomes exhausted and desires another cookie, which restarts the cycle. This is what emerges when a testing team seeks the “ultimate” test framework for their product. It’s an understandable desire. Unfortunately, you don’t always know what the ideal structure is until the project is finished. If you wait until the product is finished, testing will be included towards the end of the project, which is too little, too late. Rather than building a flawless test framework, consider creating a just-good-enough test framework for the time being and refactor it as you go. This gives the testing team enough automation to get started and growing familiarity with the automation as the iteration and project progress. It does not bind you to a framework that no longer serves you because you have invested so much money and time in designing it. Bear in mind that testers are similar to consumers. Just as your product’s consumers can’t necessarily tell what they want or need until they see it, testers can’t always tell what test automation framework they desire or require until they begin using it.

“Done” Is the Result of a Collaborative Effort

How does the test squad keep pace with development once you comprehend what “done” means and you construct a just-good-enough framework for testing? By ensuring that the whole team works on a story until it is finished. Assume you have a story that calls for two programmers and one tester. The feature is created collaboratively by the developers. Simultaneously, the tester reshapes the tests, adds sufficient automation, or integrates the test into the current automation framework. But what if you’re shifting to agile and don’t have a framework? Then, one (or more) of the programmers collaborates with the tester to build a suitable framework and integrate the tests for this functionality into it. No law says programmers cannot assist testers in establishing test frameworks, writing test frameworks, or even writing tests to facilitate the completion of a story. Given that you have a team definition of “done,” doesn’t it make sense for team members to assist one another in getting things done?

Closing

Once you understand what “done” means for a story and the entire team is committed to completing it, you can build a culture where the testing process can convert to agile.
The cross-functional project team may move to agile as long as the programmers help with testing frameworks, the business analysts help with story refinement, and the testers help deliver data on the product under test. Switching to an agile methodology involves the entire project team, not just the programmers. If you have testers who can’t “keep pace,” it is not the fault of the testers alone; it is a problem for the whole team. You need to resolve the issue with the team, change their mindset, and let them see the benefit of going in this direction. Even if you start with mediocre test frameworks, you can modify them into something spectacular over time; it’s up to you.
A term you have probably heard a lot nowadays is continuous testing. Continuous testing, explained simply, is about testing everywhere across the software development lifecycle and should include activities beyond automation, such as exploratory testing. Continuous testing implies that testing is not shifted but is found at every stage of the software development lifecycle. This is supported by the famous Continuous Testing in DevOps model created by Dan Ashby, which you can see in Figure 1. Continuous testing in the DevOps model. In addition, continuous testing is not to be mistaken for shift-left testing, as these are two different approaches. While the shift-left approach to testing requires you to shift the testing process earlier in the development lifecycle so issues can be found earlier, continuous testing highlights that testing happens at all stages of the project lifecycle and is embedded throughout the infinite software development cycle. Therefore, the shift left approach shouldn't be used as an excuse not to perform continuous testing. Continuous testing goes beyond shift-left testing. In this article, I will focus on: Challenging how others might define continuous testing. How to implement continuous performance testing. What Is Continuous Testing? “Continuous testing refers to the execution of automated tests that are carried out at regular intervals every time code changes are made. These tests are conducted as a part of the software delivery pipeline to drive faster feedback on recent changes pushed to the code repository.” The above is taken from BrowserStack's definition of continuous testing, which is generally what most organizations also use. Automated tests executed as part of a build pipeline encompass unit, integration, end-to-end tests, performance tests, and more. While automation plays a significant role in enabling continuous testing, continuous testing is not just about having automated tests run as part of a build pipeline. The left side of the continuous testing model in Figure 1 shows that testing happens when you test the plan by getting involved in discussions earlier and challenging requirements. It happens when you test the branch by pulling the code locally and exploring the features or even when testing your team’s branching strategies. It happens when you test the code via code reviews and ensure that automated tests are in place. It even happens during the merging and build process by checking if your build pipelines have built the correct versions. On the other hand, the right side of the continuous testing model in Figure 1 shows that once you have released and deployed your changes, testing also happens continuously. If you use deployment techniques such as canary releases or blue/green deployments, these are also tested to ensure the deployment is successful. When it’s been deployed to production, real users test the changes continuously. But it doesn’t stop there. Testing also occurs during the observability and monitoring stage. You can gather the metrics and use this data to drive improvements continuously. Continuous testing goes beyond automated checks. It could also be found beyond the continuous testing model, such as testing ideas before the software development lifecycle. Continuous testing needs to be built in team cultures with openness to learning, collaboration, and even experimentation. Team members must be encouraged to try different approaches and experiment on which works best for their testing needs. 
How To Implement Continuous Performance Testing Before I set some guidelines on implementing continuous performance testing, let’s look at how performance testing is done traditionally and explore why this doesn’t scale well nowadays. Traditionally, performance testing is seen as an activity you do right at the end before releasing it to production. It’s done after you have verified the main functionalities of a system and often requires a specialized group of performance testers, which creates a siloed approach. Since it is left right at the end, any issues found during performance testing are expensive. Trying to fit the traditional performance testing approach into Agile ways of working simply doesn’t work due to the rapid development of features and the need to release these features quickly. So, how can you implement a continuous approach to performance testing? Is it by introducing automated performance tests that are triggered automatically when there are new changes added? While this is important, remember that continuous testing is more than just automation. The following sections give an overview of how you can incorporate performance testing across all stages of the software development life cycle, with reference to Dan Ashby’s continuous testing model in Figure 1. The sections below are not set in stone but just guidelines, so always consider the context of your projects. Plan Performance testing requirements need to be discussed as early as possible and incorporated as part of a user story to increase awareness. From personal experience, if your team has a performance champion, it is easier to convince the rest of the team why performance testing needs to be done earlier. A performance champion can be part of a wider Community of Practice, especially focused on performance-recommended practices that educate different teams. Performance testing activities during the planning stage. Make it a practice to discuss performance requirements as part of every feature and create acceptance criteria for it based on existing Service Level Agreements (SLA) and Service Level Objectives (SLO) that are in place. If there are no SLAs or SLOs, collaborate on what these should be. Consider including performance requirements in your definition of done (DoD). This will increase awareness that performance is included continuously in every feature that is being developed. If your teams are practicing three amigos or story-shaping sessions to expand requirements, ensure that everyone understands what is expected from a performance point of view. Branch and Code You and your team can also investigate the code and check for any possible performance bottlenecks that might arise. This is also where it’s useful to write automated performance tests at the same time as the code is being developed. Having a modern performance testing tool can help as you get wider buy-ins if writing performance checks offer a great developer experience. Performance testing activities during the branching and coding stage. Performance checks at this stage can be lower-level component testing rather than a full-blown end-to-end performance. Examples of component-level testing that you can do from a performance perspective include: Focusing on protocol-level tests without involving a UI. Targeting specific API endpoints and observing the response times when introduced with a gradual increase in load. Finding the breaking point of an API endpoint by performing stress or spike testing earlier on. 
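To make the component-level examples above concrete, here is a minimal sketch of a protocol-level check against a single API endpoint using Locust, a Python load-testing tool; the host, endpoint, and response-time budget are assumptions for illustration and should come from your own SLAs and SLOs.

# Minimal protocol-level performance check with Locust: virtual users hit one
# API endpoint (no UI involved) so response times can be observed under load.
# The host, endpoint, and 500 ms budget are hypothetical.
from locust import HttpUser, task, between

class CatalogApiUser(HttpUser):
    host = "https://staging.example.com"  # assumed test environment
    wait_time = between(1, 2)             # seconds each user waits between requests

    @task
    def get_products(self):
        # Mark the request as failed in the stats if it errors or breaches the budget
        with self.client.get("/api/products", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"unexpected status {response.status_code}")
            elif response.elapsed.total_seconds() > 0.5:
                response.failure("response slower than 500 ms budget")

A script like this can run headless in a pipeline (for example, locust -f perf_check.py --headless -u 20 -r 5 -t 1m) with a small user count, which fits the gradual-increase and breaking-point experiments described above; heavier stress or spike profiles can reuse the same scenario with different load settings.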
Beyond automation, you can also try pair testing and explore the application locally and aim to find performance issues that can’t be caught by automated tests, such as checking the perceived performance of your application. Merge When developers push their code into the build pipelines and with the introduction of feature environments in some cases, it’s important to include fast and reliable automated tests to enable a fast feedback loop. From a performance perspective, you can run the component-level performance tests identified in the previous stage as part of your CI/CD pipelines. You may also want to add the tests you have created as part of a smoke performance tests suite that verifies that your application can operate when exposed to a minimal load. From a backend performance perspective, these tests should run with fewer virtual users or shorter duration. If you are focused on front-end or client-side performance, you can also run basic checks on a few pages to get data using tools specifically for client-side performance. Performance testing activities during the merging stage. To have better confidence, you can use performance heuristics during exploratory testing once changes are deployed to an environment (this could also be done earlier!). Build When features are merged into your main branch, this is where you can run more performance checks that are closer to what your user experiences. Instead of component-level performance tests, you can focus on end-to-end performance tests because you still need to verify the main user flows that your typical user would perform. Examples of end-to-end testing that you can do from a performance perspective include: Running your average load tests automatically to assess how the system performs under a typical load whenever the code is deployed to a staging environment. Running end-to-end performance tests to simulate a user journey flow and find blind spots on the browser level whenever a backend service is exposed to a high load. At this stage, your performance tests are more realistic and try to simulate what a typical user would do. Optional stress, spike, or soak tests that can be triggered manually on your CI pipeline If you have integrated your results into a visualization dashboard, you can continuously observe the performance trends and use the data to inform your team whether it can be deployed to production safely. Performance testing activities during the building stage. Release and Deploy When features are released in production, you can perform health checks to verify if the deployment has been successful. Performance testing activities during the release and deployment stage. Some companies also don’t have pre-production environments suitable for performance testing. In this case, you can test in production but do it safely and in a limited capacity because you don’t want it to be disruptive to your users. For example, you can perform load tests during an agreed window, normally during off-peak hours. Operate and Monitor Once your code is live and in production, your users continuously test it. To understand user pain points related to performance, have a channel where you can get their feedback and incorporate it into the next iteration of your software development life cycle. This also allows you to learn more about your system based on how different people use it, which leads to testing ideas you can apply in the next cycle. Performance testing activities during the operation and monitoring stage. 
Having a monitoring solution is also one way of continuously testing your application. This gives you real-time results on any performance errors that your users might experience, whether it’s any pages that have slower response times or any failed requests that might be happening on different data services. Conclusion Continuous testing allows us to learn about our system in all stages of the software development lifecycle. From a performance perspective, continuous testing doesn’t mean more automated performance tests. Rather, it’s about embracing that in each stage of the software development life cycle, performance needs to be embedded and improved continuously.
You’re working hard to transform your ways of working with a range of different goals. Common aims of digital transformations include:

To become more Agile;
To deliver faster through DevOps;
To migrate all of your systems to the cloud;
To enable regular change.

Whatever your desired outcome, there’s one common problem that most (everybody really) ignore. Yet, overlooking this problem ultimately means that the initiative will fail, become delayed, cost too much, or generally become severely hampered going forward. This perennial (and perennially ignored) problem is “accidental complexity.” This includes the accidental complexity already inherent in the way you make changes today or the accidental complexity that you’ll introduce in the future because of how you choose to make changes tomorrow.

“While essential complexity is inherent and unavoidable, accidental complexity is caused by the chosen approach to solve the problem.” Hugo Sereno Ferreira, "Incomplete by Design: Thoughts on Agile Architectures" (Agile Portugal: 2010).

Organizations rarely have the opportunity to start fresh when it comes to IT systems. Any Agile transformation — which is fundamentally a move to small, iterative, emergent change — is completed within brown-field architectures. These inevitably have accidental complexity built in. This article sets out different symptoms of accidental complexity, discussing how they derail your transformation initiatives. My previous blog then offers inspiration for how you can solve this accidental complexity.

Too Many UI Tests

Because interfaces aren’t particularly well understood or documented, organizations are forced back to creating tests that focus on the user interface. This over-focus on UI testing is inefficient and costly while undermining our ability to test early and iteratively. It further tends to have low overall coverage, exposing systems to bugs.

Combinatorial Explosions — Subjective Coverage

Because the understanding of our system is at the e2e user flow level, we have a multiplying explosion of business logic. This “combinatorial explosion” is complex beyond human comprehension and is impossible to test against within a reasonable timeframe. In this scenario, the only way to achieve a valuable outcome in testing is to apply a risk-based approach. Yet, this is rarely recognized, or risk is based on an SME's opinion of what is “enough” testing.

Bloated Regression Test Pack With Lots of Duplication

This combinatorial explosion multiplies complexity in your testing but also in your ability to understand your systems. The problem simply becomes too big to understand. We are all taught to break problems down into smaller parts, but this seems to elude many test approaches.

Huge Data Requirements

The large volume of tests needed to traverse the multiple systems in e2e journeys proliferates the demand for test data. Test data becomes embroiled in complexity, not just because the data required for the test isn’t well understood but also because of the systems of record in which these data items reside. These systems are themselves under constant change, during which accidental complexity is playing its part. The systems are often poorly understood and poorly documented. Provisioning data for testing, in turn, isn’t the transactional request you thought it was. It risks massive complexity, massive labor, errors, and bottlenecks.

Huge Test Environment Requirements

The snowball of complexity continues to grow: Because I need e2e tests, I need e2e environments.
Testing and development, in turn, need numerous channels, middleware, and systems of record. Often, these will be legacy (mainframe) systems that you can’t build overnight. In fact, organizations have often lost the ability and knowledge to build these systems from the ground up. As a consequence, our only choice left is to use the finite number of fully integrated e2e environments available in the organization. Yet, even one of these environments will cost millions in infrastructure alone and the same multiple times over in resources to maintain. And that’s not the only problem. Organizations have many teams making changes, and they all need to test e2e. Teams queue for environments, creating a huge bottleneck that drains the organization’s change budget. Test Drift From the System Under Test A separation between what is being tested and how the system actually works will inevitably occur if you don’t have effective means to refactor what you are testing in line with what is being tested. Very rarely will you see a team talk about how they refactor test assets because most don’t do it. This not only leads to test bloat but creates outdated and invalid tests and misalignment in what your test efforts cover. Organizations Are Much More Than a Structure Chart The problems discussed so far are much more systemic than testing. Accidental complexity additionally stems from organizations and their structures. Challenges include: Organizations are siloed, and so are IT change teams. Conway’s law tells us that these silos create an architecture where interfaces are not well understood or maintained. Teams don’t talk to each other unless they have to…. IT change creates “layering” in the understanding of how systems work. As systems grow, they become increasingly complex, with more unknowns. The three ways of DevOps talk to Flow, Feedback, and Experimentation/Learning. Flow talks about “Never allowing local optimization to create global degradation.” This should cause teams to rethink the way in which they approach change, but it rarely seems to [1]. This quote indicates the need for collaboration across teams. Such collaboration is blocked by the “pizza box-sized team,” who carry on working in a silo, chuck their work over the fence, and find problems during large end-to-end integration testing events. Whilst a team can work in a silo as much as they can, they inevitably need to integrate the system they are working on with the rest of the organization. This choice of approach might have been taken as the path of least resistance to get started. It might even have been taken in the name of experimentation or a “start-up” initiative within a larger organization. Whatever the rationale, it likely did not consider the accidental complexity such an approach creates or contributes to within the wider organization. You might see testing as the barometer of an organization’s maturity when making change. If systems are testable, change is understood and observable. If quality and risk are discussed, you have a healthy ecosystem. If all of this seems too hard, you will unfortunately continue down the increasing spiral of complexity. Can't Change, Won't Change “Culture eats strategy for breakfast.” — Peter Drucker Whilst you will undoubtedly recognize many of the points raised in this article, the biggest challenge isn’t knowing how to change but rather wanting to change. Many organizations work to a certain drumbeat. 
The innovators go off and get the latest tech, not yet realizing they are just rebuilding the same problems with a flashier tool. Then you have the laggards who are stuck in the ways of working of yesteryear. They will say, “We did Agile before it was Agile,” and yearn to go back to a time when there was even less documentation and even less change or version control. Each has some good practices, but both create more accidental complexity and optimize locally, not globally, across the organization. So, how will you face your organization's accidental complexity? “Accidental complexity is when something can be scaled down or made less complex, not lose any value, and likely add value because it's been simplified.” Kristi Pelzel, “Design Theory: Accidental and Essential Complexity” (Medium: 2022) References [1] Gene Kim, “The Three Ways: The Principles Underpinning DevOps” (IT Revolution: 2012).
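To make the combinatorial explosion described above concrete, here is a minimal, hypothetical illustration. The journey steps, option counts, and timings are invented for the example and are not drawn from any real estate; the point is only how quickly e2e path counts multiply.

# Illustrative only: the journey steps and option counts below are hypothetical.
from math import prod

# Branching options at each step of a single e2e journey
steps = {
    "channel": 3,          # e.g., web, mobile, call center
    "customer_type": 4,
    "product": 5,
    "payment_method": 4,
    "fulfilment_route": 3,
}

total_paths = prod(steps.values())
print(f"Distinct paths through one journey: {total_paths}")  # 3*4*5*4*3 = 720

# At roughly 30 minutes per manual e2e test, exhaustive coverage of this one
# journey alone would take about 360 hours, before any retests or data setup.
print(f"Approximate execution effort: {total_paths * 0.5:.0f} hours")

Add a few more interacting systems, each with its own states, and the number of paths multiplies again, which is why a risk-based selection, rather than exhaustive e2e coverage, is the only practical option.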
In the ever-evolving landscape of software development, ensuring the resilience of your applications has never been more critical. Unexpected disasters or system failures can lead to costly downtime and damage to your reputation. This is where business continuity and disaster recovery (BCDR) testing comes into play. In this article, we'll explore the significance of BCDR testing and how to seamlessly integrate it into your software development lifecycle (SDLC). Understanding BCDR Testing Before diving into the specifics of BCDR testing, it's important to grasp its fundamental concept. BCDR testing involves assessing the readiness of your software systems to withstand disasters, system failures, or any adverse events that may disrupt normal operations. It goes beyond typical testing and quality assurance efforts and focuses on ensuring that your software can recover swiftly and effectively in the face of adversity. Types of BCDR Testing BCDR testing encompasses several distinct types, each serving a unique purpose in evaluating the resilience of your software: 1. Disaster Recovery Testing Disaster recovery testing aims to validate the effectiveness of your disaster recovery plan. It involves simulating disaster scenarios to ensure that your recovery processes are robust and that your applications can be restored to full functionality within predefined recovery time objectives (RTOs). 2. Failover Testing Failover testing is crucial for applications that require high availability. This type of testing checks whether your system can seamlessly switch to a backup server or environment when the primary one fails. It helps ensure uninterrupted service for your users. 3. Data Backup and Recovery Testing Data backup and recovery testing focuses on the integrity and recoverability of your data. It verifies that your data backups are up to date and accurate, and that they can be successfully restored in case of data loss or corruption. 4. Scenario-Based Testing Scenario-based testing involves simulating real-world disaster events, such as natural disasters, cyberattacks, or power outages. It evaluates your software's response to these events and assesses whether your recovery strategies align with the specific challenges posed by each scenario. 5. Redundancy Testing Lastly, redundancy testing examines the effectiveness of your redundancy and fail-safe mechanisms. It ensures that redundant components or systems can seamlessly take over when the primary ones fail, minimizing downtime. Benefits of BCDR Testing Investing time and resources in BCDR testing is a strategic move that yields a multitude of benefits for software development teams. These advantages collectively contribute to the overall resilience and dependability of your software systems. Some of these benefits are: 1. Reduced Downtime One of the primary advantages of BCDR testing is the reduction of downtime. By identifying and addressing vulnerabilities in your software's recovery processes, you can minimize the impact of disasters or failures, ensuring that your applications stay operational. 2. Improved Reliability BCDR testing also enhances the reliability of your software. It instills confidence that your applications can bounce back from disruptions, contributing to a more resilient and trustworthy user experience. 3. Regulatory Compliance For organizations operating in regulated industries, BCDR testing is often a compliance requirement. 
Meeting these regulations not only establishes legal adherence but also reinforces your commitment to data security and availability. 4. User Confidence Users expect uninterrupted access to your software services. Demonstrating that you are well-prepared for disasters and unexpected events fosters user confidence and loyalty. Incorporating BCDR Testing into Your SDLC Now that we've established the importance of BCDR testing, let's explore how to seamlessly integrate it into your software development lifecycle. 1. Assessment Begin by assessing your current BCDR readiness. Identify the critical components of your applications, potential points of failure, and your existing disaster recovery plan (if any). This assessment serves as a baseline for improvement. 2. Test Planning Develop a comprehensive BCDR test plan. Outline the types of tests you'll conduct, the specific scenarios you'll simulate, and the resources required. Consider the RTOs and recovery point objectives (RPOs) for your applications. 3. Test Execution Execute a series of BCDR tests based on your plan. Run the types of testing outlined above as they apply to your application's needs. 4. Analysis and Improvement After each test, thoroughly analyze the results. Identify any weaknesses or bottlenecks in your recovery processes and make the necessary improvements. Ensure that your recovery procedures are well-documented and accessible to the appropriate personnel. 5. Automation Leverage automation tools to streamline your BCDR testing efforts (a minimal sketch of one such automated check appears at the end of this article). Automation not only saves time but also ensures consistency and accuracy in your testing processes. Best Practices for Effective BCDR Testing While implementing BCDR testing, adhere to best practices to maximize its effectiveness: Realistic scenarios: Create disaster scenarios that closely mirror potential real-world events. The more realistic your tests, the better prepared your team will be when faced with actual disasters. Regular testing: BCDR testing should be an ongoing process. Develop a routine testing schedule to continually assess and improve your software's resilience. Documentation: Maintain detailed records of your BCDR tests, outcomes, and any adjustments made. This documentation is invaluable for future reference and compliance requirements. Cross-functional collaboration: Involve various teams in your BCDR testing efforts. Collaboration between development, operations, and security teams ensures a holistic approach to disaster recovery. Challenges and Pitfalls As with any testing initiative, BCDR testing comes with its challenges and potential pitfalls: Resource constraints: Resource limitations, such as budget and personnel, can pose challenges to comprehensive BCDR testing. Prioritize critical systems and allocate resources strategically. Test complexity: BCDR testing can be complex, especially when dealing with intricate applications and multiple disaster scenarios. Develop a clear strategy to manage the intricacies effectively. Integration issues: Ensure that your BCDR tests seamlessly integrate with your SDLC. Incompatibilities or conflicts can hinder the success of your testing efforts. The Bottom Line Incorporating BCDR testing into your SDLC is not just a best practice; it's a strategic imperative in today's dynamic technological landscape. It's a proactive approach that pays dividends when disasters strike and serves as a testament to your commitment to providing unwavering service to your users and stakeholders. 
Embrace BCDR testing as a fundamental component of your software development strategy, and you'll be primed to navigate the unpredictable terrain of the digital world with confidence. Additional Resources For further reading and resources on BCDR testing and related topics, explore the following: Disaster Recovery BCDR Guide Business Continuity and Disaster Recovery BCDR Recovery Plan
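As a concrete companion to the automation step above, here is a minimal, hypothetical sketch of an automated backup-and-restore verification check written with pytest. It uses Python's built-in sqlite3 purely so the example is self-contained; the table, row count, and assertion are assumptions for illustration, and a real BCDR test would target your production backup tooling and assert against your actual RTO/RPO thresholds.

# Minimal, self-contained sketch of a data backup and recovery check (illustrative only).
import sqlite3

def create_source_db(path):
    # Stand-in for the production database whose backups we want to verify
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(i * 1.5,) for i in range(100)])
    conn.commit()
    return conn

def test_backup_can_be_restored(tmp_path):
    source = create_source_db(tmp_path / "prod.db")

    # Take a backup: a stand-in for pg_dump, storage snapshots, or whatever backup tooling you use
    backup = sqlite3.connect(tmp_path / "backup.db")
    source.backup(backup)

    # Simulate the disaster by discarding the source entirely
    source.close()

    # "Restore" from the backup and verify the data survived intact
    restored_rows = backup.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert restored_rows == 100, "Restored data is incomplete; recovery fails the RPO"

Run on a schedule rather than as a one-off exercise, a check like this turns the data backup and recovery testing described earlier into a repeatable, automated gate.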
This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report As per the reports of Global Market Insights, the automation testing market size surpassed $20 billion (USD) in 2022 and is projected to witness over 15% CAGR from 2023 to 2032. This can be attributed to the willingness of organizations to use sophisticated test automation techniques as part of the quality assurance operations (QAOps) process. By reducing the time required to automate functionalities, it accelerates the commercialization of software solutions. It also offers quick bug extermination and post-deployment debugging, and it helps preserve the integrity of the software through early notification of unforeseen changes. Figure 1: QAOps cycle What Is the Automated Testing Lifecycle? The automation testing lifecycle is a multi-stage process that covers documentation, planning, strategy, and design. The cycle also involves developing the automated test cases and deploying them to an isolated system that can run on specific events or on a schedule. Phases of the Automated Testing Lifecycle There are six different phases of the automated testing lifecycle: Determining the scope of automation Architecting the approach for test automation (tools, libraries, delivery, version control, CI, other integrations) Setting the right test plan, test strategy, and test design Automation environment setup Test script development and execution Analysis and generation of test reports Figure 2: Automated testing lifecycle Architecture Architecture is an important part of the automation lifecycle because it defines the strategy required to start automation. In this phase of the lifecycle, the people involved need to have a clear understanding of the workflows, executions, and required integrations with the framework. Tools of the Trade In today's automation trends, the new buzzword is "codeless automation," which helps accelerate test creation. There are a few open-source libraries as well, such as Playwright, which offer codeless features like codegen. Developing a Framework When collaborating in a team, a structured design technique is required. This helps create better code quality and reusability. If the framework is intended to deliver the automation of a web application, then the team of automation testers needs to follow a specific design pattern for writing the code (a minimal sketch of such a pattern appears at the end of this article). Execution of Tests in Docker One important factor in today's software test automation is that the code needs to be run on Docker in isolation every time the test runs are executed. There are a couple of advantages to using Docker. It sets up the entire testing environment from scratch for each run, which removes flakiness caused by leftover state. Running automation tests on containers can also eliminate any browser instances that might have been suspended because of test failures. Also, many CI tools support Docker through plugins, so running test builds by spinning up a Docker instance each time can be done easily. Continuous Testing Through CI When it comes to testing in the QAOps process, CI plays an important role in the software release process. CI is a multi-stage process that runs whenever a commit is made to the version control system, helping diagnose the quality and stability of a software application that is ready for deployment. Thus, CI is an essential part of today's software testing. 
It helps uncover integration bugs, detect them as early as possible, and keep track of the application's stability over time. Setting up a CI process can be achieved through tools like Jenkins and CircleCI. Determining the Scope of Test Automation Defining the feasibility for automation is the first step of the automation lifecycle. This defines the scope and the functionality that needs to be automated. Test Case Management Test case management is a technique to prioritize or select the broader scenarios from a group of test cases for automation that could cover a feature/module or a service as a functionality. In order to ensure the top quality of products, it is important that the complexity of test case management can scale with application complexity and the number of test cases. The Right Test Plan, Test Strategy, and Test Design Selecting a test automation framework is the first step in the test strategy phase of an automated testing lifecycle, and it depends on a thorough understanding of the product. In the test planning phase, the testing team decides on the: Test procedure creation, standards, and guidelines Hardware Software and network to support a test environment Preliminary test schedule Test data requirements Defect tracking procedure and the associated tracking tool Automation Environment Setup The build script to set up the automation environment can be initiated using a GitHub webhook. The GitHub webhook can be used to trigger an event in the CI pipeline that runs the build scripts and the test execution script. The build script can be executed in the CI pipeline using Docker Compose and Docker scripts. docker-compose.yml:

version: "3.3"
services:
  test:
    build: ./
    environment:
      slack_hook: ${slack_hook}
      s3_bucket: ${s3_bucket}
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
      aws_region: ${aws_region}
    command: ./execute.sh --regression

Dockerfile:

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND noninteractive

# Install updates to base image
RUN apt-get -y update && \
    apt-get -y install --no-install-recommends tzdata && \
    rm -rf /var/lib/apt/lists/*

# Install required packages
ENV TZ=Australia/Melbourne
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN apt-get -y update && \
    apt-get install -y --no-install-recommends software-properties-common \
    apt-utils \
    curl \
    wget \
    unzip \
    libxss1 \
    libappindicator1 \
    libindicator7 \
    libasound2 \
    libgconf-2-4 \
    libnspr4 \
    libnss3 \
    libpango1.0-0 \
    fonts-liberation \
    xdg-utils \
    gpg-agent \
    git && \
    rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:deadsnakes/ppa

# Install chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update \
    && apt-get install -y --no-install-recommends google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Install firefox
RUN apt-get install -y --no-install-recommends firefox

# Install python version 3.0+
RUN add-apt-repository universe
RUN apt-get -y update && \
    apt-get install -y --no-install-recommends python3.8 \
    python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN mkdir app && mkdir drivers

# Copy drivers directory and app module to the machine
COPY app/requirements.txt /app/

# Upgrade pip and install dependencies
RUN pip3 install --upgrade pip \
    -r /app/requirements.txt
COPY app /app
COPY drivers /drivers

# Execute test
ADD execute.sh .
RUN chmod +x execute.sh
ENTRYPOINT ["/bin/bash"]

Seeding Test Data in the Database Seed data can be populated for a particular model, or it can be done using a migration script or a database dump. For example, Django has a single-line loader function that helps seed data from a YAML file. The script to seed the database can be written as a bash script and executed every time a container is created. Take the following code blocks as examples. entrypoint.sh:

#!/bin/bash
set -e
python manage.py loaddata maps/fixtures/country_data.yaml
exec "$@"

Dockerfile:

FROM python:3.7-slim
RUN apt-get update && \
    apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

Setting up the Workflow Using Pipeline as Code Nowadays, it is easy to run builds and execute Docker from CI using Docker plugins. The best way to set up the workflow from CI is by using pipeline as code. A pipeline-as-code file specifies the actions and stages for a CI pipeline to perform. Because every organization uses a version control system, changes in pipeline code can be tested in branches alongside the corresponding changes in the application to be deployed. The following code block is an example of pipeline as code. config.yml:

steps:
  - label: ":docker: automation pipeline"
    env:
      VERSION: "$BUILD_ID"
    timeout_in_minutes: 60
    plugins:
      - docker-compose#v3.7.0:
          run: test
    retry:
      automatic:
        - exit_status: "*"
          limit: 1

Checklist for Test Environment Setup Test data List of all the systems, modules, and applications to test Application under test access and valid credentials An isolated database server for the staging environment Tests across multiple browsers All documentation and guidelines required for setting up the environment and workflows Tool licenses, if required Automation framework implementation Development and Execution of Automated Tests To ensure test scripts run accordingly, the development of test scripts based on the test cases requires focusing on: Selection of the test cases Creating reusable functions Structured and easy scripts for increased code readability Peer reviews to check for code quality Use of reporting tools/libraries/dashboards Execution of Automated Tests in CI Figure 3 is a basic workflow that defines how a scalable automation process can work. In my experience, the very basic need to run a scalable automation script in the CI pipeline is met by using a trigger that sets up the test dependencies within Docker and executes tests based on the need. Figure 3: Bird's eye view of automation process For example, a test pipeline may run a regression script, whereas another pipeline may run the API scripts. These cases can be handled from a single script that acts as the trigger to the test scripts. 
execute.sh:

#!/bin/bash
set -eu

# Create the csv_reports, logs, html_reports, and screenshots directories
mkdir app/csv_reports app/logs
mkdir app/html_reports/screenshots

# Validate whether an argument was passed or not
if [ $# -eq 0 ]; then
  echo "No option is passed as argument";
fi

# Parse command line arguments to run tests accordingly
for i in "$@"; do
  case $i in
    --regression)
      pytest -p no:randomly app/test/ -m regression --browser firefox --headless true --html=app/html_reports/"$(date '+%F_%H:%M:%S')_regression".html --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
      break
      ;;
    --smoke)
      pytest app/test -m smoke
      break
      ;;
    --sanity)
      pytest app/test -m sanity --browser chrome --headless true --html=app/html_reports/sanity_reports.html --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
      break
      ;;
    --apitest)
      npm run apitest
      break
      ;;
    --debug)
      pytest app/test -m debug --browser chrome --headless true --html=app/html_reports/report.html --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
      break
      ;;
    *)
      echo "Option not available"
      ;;
  esac
done

test_exit_status=$?
exit $test_exit_status

Analysis of Test Reports By analyzing test reports, testing teams are able to determine whether additional testing is needed, whether the scripts used can accurately identify errors, and how well the tested application(s) can withstand challenges. Reports can be presented either as static HTML or as a dynamic dashboard. Dashboards help stakeholders understand trends in test execution by comparing current results with past execution data. For example, Allure reporting creates a concise dashboard of the test outcomes using data collected from test execution. Conclusion The automated testing lifecycle is a curated process that helps application testing meet specific goals within appropriate timelines. Furthermore, it is very important for the QAOps process to gel properly with the SDLC and rapid application development. When completed correctly, the six phases of the lifecycle will achieve better outcomes and delivery. Additional Reading: Cloud-Based Automated Testing Essentials Refcard by Justin Albano "Introduction to App Automation for Better Productivity and Scaling" by Soumyajit Basu 
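Following on from the "Developing a Framework" section above, here is a minimal, hypothetical sketch of the kind of structured design pattern a team might adopt: a simple page object plus a pytest test carrying the same regression and smoke markers that execute.sh selects with -m. The URL, locators, credentials, and driver fixture are illustrative assumptions and not part of the framework described in this article.

# Minimal page-object sketch; names, URLs, and locators are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Page object: keeps locators and actions for one page in a single reusable place."""

    URL = "https://example.com/login"  # hypothetical application under test

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return self


@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # mirrors the headless runs in execute.sh
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()


# Register "regression" and "smoke" under [pytest] markers in pytest.ini so that
# execute.sh can select suites with `pytest -m regression` or `pytest -m smoke`.
@pytest.mark.regression
@pytest.mark.smoke
def test_valid_login_lands_on_dashboard(driver):
    LoginPage(driver).open().login("qa_user", "s3cret")
    assert "/dashboard" in driver.current_url

Keeping locators and actions inside page objects like this is what makes the test scripts reusable and readable, which is exactly what the "Development and Execution of Automated Tests" checklist above asks for.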
This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report Modern software applications are complex and full of many dynamic components that generate, collect, and fetch data from other components simultaneously. If any of these components acts unexpectedly or, worse, fails, there can be a cascading effect on all other dependent components. Depending on the nature of the software, these errors or failures can result in system downtime, financial loss, infrastructure collapse, safety implications, or even loss of life. This is why we test and monitor software. Testing with the right techniques and test cases at the right stages in the software lifecycle increases the chances of catching problems early and before users do. When and Where to Test Generally, tests occur in the "testing" stage of the software development lifecycle (SDLC). However, this is not the case for certain types of tests, and when you implement and run each test type can vary. Before we get into selecting the right test, let's quickly review when and where to use different types of tests.

THE COMMON TYPES OF TESTS

Test Type | What It Identifies | SDLC Stage | Implementation Options
Unit | Unexpected or missing function input and output | Development, testing | Defined in code, typically with language libraries
API and integration | Integrations with third-party services | Development, deployment, testing | Defined in code, typically with language and other libraries needed for the integration
UI | Functional interactions with the user interfaces | Testing | Specialized testing frameworks
Security | Vulnerabilities and attack vectors | Development, testing, deployment, maintenance | Specialized testing frameworks
Performance | Key application metrics | Deployment, maintenance | Metric-dependent tools
Smoke | If an application still functions after a build | Testing, deployment | Specialized testing frameworks
Regression | If new code breaks old code | Testing, deployment | Specialized testing frameworks

How To Choose Tests As with many technical projects, reading a list of recommendations and best practices is only the beginning, and it can be difficult to decide which of those apply to your use case. The best way is to introduce an example and show the reasoning behind how to decide a strategy based on that use case. It won't match any other use case exactly but can help you understand the thought process. Example Application I have a side project I am slowly trying to build into a full application. It's a to-do aggregator that pulls tasks assigned to me from a variety of external services and combines them into one easier-to-view list. It uses the APIs of each of these services to fetch assigned task data. Users can sort and filter the list and click on list items to see more details about the task. The application is written in TypeScript and React and uses Material UI. Additionally, there are mobile and desktop versions created with React Native. Essential Tests Unless you have a good reason not to include them, the tests covered in this section are essential in an application test suite. Figure 1: Essential tests in the SDLC Unit Tests Unit tests are essential for almost any application, can be created as you build the code, and are needed by any application that has more than one functional component. The example application has one component that takes the data returned from the APIs and converts it to React objects ready for rendering in the UI. 
Some examples of unit tests in this example could be: Determining whether there are objects to render Checking if the objects have essential data items to render (for example, the title) Determining if the UI is ready for objects to be rendered to it As the application uses TypeScript, there are many options available for writing unit tests, including Jest, Mocha, and Jasmine. They all have advantages and disadvantages in the ways they work, with no real "right answer" as to which is best. Jest is possibly the most popular at the moment and was created by Facebook to unit test React. The example application is based on React, so perfect! API and Integration Tests The example application relies heavily on multiple APIs that have multiple points of failure, with the potential to render the application unusable if handled poorly. API and integration tests are not quite the same as each other. While API testing exercises only API interactions, integration testing covers APIs as well as other third-party integrations, such as external components. As the example application's only third-party integrations are APIs, we can consider them the same. Some examples of API errors to test for could be: Expired credentials or changes to authentication methods A call that returns no data A call that returns unexpected data Rate limiting on an API call API tests typically happen externally to the application code, in an external tool, or in a CI pipeline. Open-source options include writing your own tests that call the API endpoints, SoapUI (from the same people that define the API spec standard), Pact, and Dredd. Personally, I tend to use Dredd for CI tests, but there is no obvious choice with API testing. UI Tests If an application has a visual front end, that front end needs automated tests. These tests typically simulate interactions with the interface to check that they work as intended. The UI for the example application is simple but essential for user experience, so some example tests could include checking whether: Scrolling the list of aggregated tasks works Selecting a task from the list opens more details Tools for automated UI testing are typically run manually or as part of a CI process. Fortunately, there are a lot of mature options available, some of which run independently from the programming language and others as part of it. If your application is web-based, then generally, these tools use a "headless browser" to run tests in an invisible browser. If the project is a native application of some flavor, then UI testing options will vary. The example project is primarily web-based, so I will only mention those options, though there are certainly more available: Selenium – a long-running, well-supported tool for UI testing Puppeteer – a mature UI testing tool designed for Node.js-based projects For the example application, I would select a tool that is well suited to TypeScript and React, and where tests are tightly coupled to the underlying UI components. Optional Tests This section deals with test types to consider if and when you have resources available. They will help improve the stability and overall user experience of your applications. Figure 2: Optional tests in the SDLC Security Security is a more pressing issue for applications than ever. You need to check for potentially vulnerable code during development and also guard against the increasing problem of introducing vulnerabilities through package dependencies. 
Aside from testing, generating and maintaining lists of external packages for software supply chain reasons is a rapidly growing need, with possible regulatory requirements coming soon. Some examples of vulnerability issues to test for could be: Storing API credentials in plain text Sending API credentials unencrypted Using vulnerable packages There are two groups of tools for testing these requirements. Some handle scanning for vulnerabilities in both your code and external code, while others handle only one of those roles. Vulnerability scanning is a new growth business for many SaaS companies, but some popular open-source and/or free options include, but are not limited to, GitHub, Falco, and Trivy. These tools are programming-language independent, and your decision should be based on the infrastructure you use behind the application. The example application runs locally on a user's device, so the best time to run a vulnerability checker would be in CI and CD during the build process. Performance Tests There is little point in creating a finely crafted application without any kind of monitoring of how well it performs in the hands of users. Unlike most of the other test types on the list, which typically run at distinct phases in the SDLC, performance testing generally happens continuously. Some tools let you mock production usage with simulated load testing, and this section includes some of those, but they are still not the same as real users. Possible issues to monitor are: Speed of populating task results Critical errors, for example, API changes in between builds Slow UI responses As performance monitoring often needs a centralized service to collate and analyze application data, these tools tend to be commercial services. However, there are some open-source or free options, including k6 (for mocking production load), sending React <Profiler> data into something like Grafana, and Lighthouse CI. Smoke Tests A lot of other testing methods test individual functionality or components, but not the paths through which these fit together or how users actually use an application. Smoke tests typically use a quality assurance (QA) environment to check that key functionality works in new builds before progressing to further tests. Smoke tests can be undertaken manually by a QA team or with automated tools. The tool options depend on what it is you want to test, so many of the other tools featured in this article can probably help. For the example application, a smoke test would check that the list of aggregated tasks is generated. Regression Tests Regression testing isn't a set of tools but a best-practice way of grouping other tests to ensure that new features don't have an adverse effect on an application. For example, say a new release adds the ability to change the status of tasks aggregated in the application, sending the status back to the source task. The following tests would work together to ensure that introducing this new feature hasn't negatively affected the existing functionality, which was only to view aggregated tasks. Some examples of regression test grouping are the following: Unit tests for the new feature API tests for updating items on the relevant service API Security tests to ensure that calling the new APIs doesn't reveal any sensitive information Performance tests for the new feature and to check that the new feature doesn't affect reloading the task list Conclusion This article covered the many different types of tests an application can implement and the kinds of issues they can prevent. 
All of these issues have the potential to hinder user experience, expose users to security issues, and cause users not to want to use your application or service. As you add new features or significantly change existing features, you should write relevant tests and run them as frequently as convenient. In many modern SDLC processes, tests typically run whenever developers check in code to version control, which you should also do frequently. 
Justin Albano
Software Engineer,
IBM
Thomas Hansen
CTO,
AINIRO.IO
Soumyajit Basu
Senior Software QA Engineer,
Encora
Vitaly Prus
Head of Software Testing Department,
a1qa