Profiling IT Complexity: A Blueprint for Effective Legacy System Testing
In the evolving realm of IT, complexity emerges as a formidable challenge, ranking high on the list of concerns. Solutions should be designed to function across multiple platforms and cloud environments, ensuring portability. They should seamlessly integrate with existing legacy systems while also being tailored to accommodate a wide range of potential scenarios and requirements. Visualizing the average IT landscape often results in a tapestry of applications, hardware, and intricate interdependencies.
The term "legacy systems" often conjures images of outdated software and obsolete hardware. Yet, beneath their seemingly archaic facade lies a critical piece of the IT puzzle that significantly influences the overall complexity of modern systems. These legacy systems, with their historical significance and enduring impact, continue to affect the way organizations manage and navigate their technological landscapes.
They usually come in the form of large monolithic systems that encompass a wide range of functionalities and features. As these projects evolved, new code was bolted on in ways that became increasingly difficult to understand and manage, introducing strong coupling between components and leading to a convoluted codebase. An update or change to one part of the system can affect other parts, creating a dependency labyrinth. Other kinds of dependencies, such as dependencies on third-party libraries, frameworks, and tools, make updates and changes even more complicated. Other characteristics of legacy systems may include outdated technologies, accumulated technical debt, and limited to no documentation. Scaling, deploying, and testing legacy systems can be tricky: as the load increases, it's often necessary to scale the entire system even if only a specific part requires more resources, leading to inefficient resource utilization, slower deployments, and slower scaling processes.
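To make the coupling problem concrete, here is a minimal, hypothetical sketch (not drawn from any specific system) of the pattern that makes monoliths hard to change: business logic that constructs and calls its collaborators directly, leaving no seam for tests or replacements.

```python
# Hypothetical sketch of tightly coupled legacy code: the service builds
# its own collaborators, so it cannot be exercised without the real
# database and mail infrastructure standing behind them.
class Database:
    def query(self, sql: str) -> list:
        # Stand-in data; imagine a call to a production database here.
        return [("2024-01", 1000), ("2024-02", 1200)]

class Mailer:
    def send(self, to: str, body: str) -> None:
        # Imagine a real SMTP call here.
        print(f"mail to {to}: {body!r}")

class ReportService:
    def monthly_report(self) -> str:
        db = Database()    # hard-wired dependency: no way to inject a fake
        mailer = Mailer()  # same problem, plus a side effect buried below
        rows = db.query("SELECT month, total FROM sales")
        report = "; ".join(f"{month}: {total}" for month, total in rows)
        mailer.send("finance@example.com", report)
        return report

print(ReportService().monthly_report())
```

Introducing seams, for example by passing the database and mailer in through the constructor, is often the first refactoring that makes code like this testable in isolation.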
This article starts by examining key factors of IT complexity at the organizational decision level: vendor complexity, system integration complexity, consultants' influence, and other internal dynamics and mindsets. We then move on to the implementation level of legacy systems and propose a blueprint for testing that is by no means exhaustive: a minimal, iterative testing process that can be tailored to our needs.
Vendor Complexity
No matter how good a vendor is at software development, when the "build over buy" approach is adopted, building software introduces its own complexities. At best, as the software built in-house evolves, the complexity of maintaining it at a certain quality level remains manageable. At worst, the complexity may run out of control, leading to unforeseen nightmares. As time passes, technologies and market needs evolve, and the systems developed may themselves turn into legacy systems.
Many enterprises opt for a "buy over build" approach, procuring a substantial portion of their software and hardware from external vendors rather than creating in-house solutions. This inclination toward enterprise-grade software comes at a significant cost, with licenses consuming large parts of IT budgets. Software vendors glean advantages from complexity in several ways:
Competing through extensive feature lists inadvertently fosters product complexity. The drive for features stems from the understanding that IT solutions are often evaluated based on the sheer presence of features, even if many of these features go unused.
Savvy vendors adeptly tout their products: "Already using an application performance management framework? Ours is superior and seamlessly integrates with your existing setup!" Consequently, the enterprise landscape becomes fraught with multiple products addressing the same issue, contributing to complexity.
Vendors introduce complexity into their product strategies, relentlessly coining new buzzwords and positioning improved solutions regularly to sustain their sales pipeline.
Hardware vendors, too, derive benefits from complexity, as they can offer hardware-based solutions to problems rather than advocating for streamlined software alternatives.
System Integration Complexity
System integrators play a pivotal role in building enterprise software or integrating purchased solutions. They contribute specialized skill sets that internal IT might lack. System integrators are service providers offering consultancy services led by experts. Although cloaked in various arrangements, the consultancy economics ultimately translate into hourly billing for these consultants, aligning with the fundamental unit of consulting: the staff hour.
More complexity invariably translates into more work and, consequently, increased revenue for system integrators.
Consultants' Influence
Consultants, often regarded as hired specialists, engage in solving intricate problems and shaping IT strategies. Yet, the imperative to remain employed may sometimes lead them to offer partial solutions, leaving room for further engagements. This behavior is termed "scoping" or "expectation management."
Other Internal Dynamics
In certain organizational contexts, the individual with a larger budget and greater operational complexity often garners more esteem. Paradoxically, complexity becomes a means of career advancement and ego elevation. Some IT managers may revel in showcasing the sophistication (i.e., complexity) of their operations.
Legacy Systems
To develop and maintain any software system effectively, we need requirements with the following qualities:
- Documented: They must be written somewhere and should not just exist in our minds. Documentation may be as lightweight as possible as long as it’s easy to maintain and fulfills its purpose of being a single source of truth.
- Correct: We understand correctly what is required from the system and what is not required.
- Complete: There are no missing attributes or features.
- Understandable: It’s easy for all stakeholders to understand what they must do in order for the requirements to be fulfilled.
- Unambiguous: When we read the requirements, we can all understand the same thing.
- Consistent: The terminology is consistent with no contradictions.
- Testable: We must have an idea about how to test that the requirements are fulfilled.
- Traceable: We should be able to trace each requirement in code and tests (see the traceability sketch after this list).
- Viable: We should be able to implement them within the existing constraints (time, money, number of employees).
- Implementation independent: Requirements should only describe what should be done and not how.
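As a concrete illustration of traceability, here is a minimal pytest sketch that tags tests with requirement IDs. The "requirement" marker name and the "REQ-042" ID scheme are assumptions for illustration, not a standard.

```python
# A minimal traceability sketch using a custom pytest marker. Register
# the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers = requirement: link a test to a requirement ID
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy stand-in for real system behavior."""
    return round(price * (1 - percent / 100), 2)

@pytest.mark.requirement("REQ-042")  # trace this test back to its requirement
def test_ten_percent_discount():
    assert apply_discount(100.0, 10.0) == 90.0
```

With such tags in place, custom reporting or simple grepping can show which requirements have tests and which do not.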
Consider a software system that, over the course of its development, has lost most, if not all, of the qualities listed above. There may be little to no documentation, and what exists is difficult to understand, incomplete, and full of contradictions; one requirement contradicts another. Terminology is inconsistent, the code was written with no testability in mind, and only a few outdated UI tests exist. The people who developed this system are no longer with the company, and no current employee understands exactly what it does or how it works. Somehow, however, the company has managed to keep this legacy system running for years despite the high maintenance costs and its outdated technologies.
The problem with such a legacy system is that everyone is afraid to change anything about it, because small changes in the code may result in big failures. Those changes could be fixes for bugs or glitches, new features being developed and released, or non-functional work such as refactoring and performance improvements.
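A common safety net in this situation is a characterization (golden master) test: instead of asserting what the system should do, we record what it currently does and pin that behavior before changing anything. Below is a minimal sketch; `legacy_pricing.quote` is a hypothetical stand-in for a real legacy entry point.

```python
# Minimal characterization-test sketch: the first run records current
# behavior as the golden master; later runs fail if behavior drifts.
# `legacy_pricing.quote` is a hypothetical legacy entry point.
import json
from pathlib import Path

from legacy_pricing import quote  # hypothetical legacy module

GOLDEN = Path("golden_quotes.json")
INPUTS = [("basic", 1), ("basic", 100), ("premium", 1), ("premium", 100)]

def test_quotes_match_recorded_behavior():
    actual = {f"{plan}/{seats}": quote(plan, seats)
              for plan, seats in INPUTS}
    if not GOLDEN.exists():
        GOLDEN.write_text(json.dumps(actual, indent=2))  # first run: record
    assert actual == json.loads(GOLDEN.read_text())
```

The recorded behavior may well contain bugs; the point is not correctness but detecting unintended change while we refactor.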
In the following sections, we will depict a set of steps to guide testing endeavors for legacy systems. Our blueprint is generic enough to be applied to a number of different contexts. Keep in mind, though, that some steps and the tasks per step may vary depending on the context.
Step 1: Initial Assessment and Reconnaissance
- Try to understand the context. Determine why we are exploring the legacy system and what goals we aim to achieve overall. What is the definition of done for our testing efforts?
- Gather any available documentation. Collect existing documentation, even if outdated, to gain initial insights into the system's functionality and architecture.
- Identify stakeholders and ask questions. Identify individuals or teams who are currently or were previously involved with the system, including developers, product managers or analysts, business users, and technical support.
The goal of step 1 is to understand the context and scope of testing. Make sure to gather as much information as possible to be able to get started. Keep in mind that there exist emergent characteristics in software systems that can only be found by exploration. As long as we’ve got a couple of ideas of where to start and what to test, we could be ready to go to step 2. Here are three questions to ask stakeholders at this stage that could help us get started.
- What are your plans for the legacy system under test?
- What is the current ecosystem in which it is used?
- Who uses this legacy system, and to solve what problem?
Step 2: Explore by Pair Testing
To learn about a legacy system quickly, it's best to work with other explorers and discuss our findings. More people can cover the same ground more quickly. Each person brings a unique perspective and set of skills into play and thus will notice things that others won't. As a group, we will discover more in the same amount of time by working together than if we all explored different aspects of the system individually. By pooling our insights, we make more effective use of our time.
What to look for:
- Try to find out the core capabilities (if any). Systems exist to perform core capabilities, although, for some systems, that’s not always crystal clear. If not certain, try to answer question 3 from step 1. There may be multiple answers to that question that we need to explore.
- Try to find out the business rules (if any). For example, banking and accounting systems have rules that have to do with the balancing of accounts. Other industries, like lottery and betting, have a strict set of business rules that must be followed. Security, authorization, authentication, and single sign-on are also factors to look for. (A sketch of turning such a rule into an executable check follows this list.)
- Try to identify the boundaries of the system under test. This may be challenging, as the boundaries in legacy systems are usually blurry. Nevertheless, try to find out what the system does, how it interfaces with what and when, and how the pieces and parts all connect.
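Once a business rule is identified, it can be captured as an executable check. Here is a minimal sketch using the account-balancing rule as an example; the data accessor is a hypothetical stand-in for the legacy data store.

```python
# Minimal sketch of a discovered business rule as an executable check:
# in double-entry bookkeeping, total debits must equal total credits.
def fetch_ledger_entries():
    # Hypothetical stand-in; in practice this would read the legacy store.
    return [("debit", 150.0), ("credit", 100.0), ("credit", 50.0)]

def test_ledger_balances():
    entries = fetch_ledger_entries()
    debits = sum(amount for kind, amount in entries if kind == "debit")
    credits = sum(amount for kind, amount in entries if kind == "credit")
    assert debits == credits  # the balancing rule, stated as a test
```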
The goal of step 2 is to understand at least the basics of the system’s behavior. We are done if we have a set of questions that can lead to fruitful discussions in step 3 and further exploration. As a rule of thumb, the more we can answer any of the following questions, the more ready we are for step 3.
- What functions does the system perform beyond its core capabilities?
- How does the system take input?
- How (or where) does it produce output?
- What constitutes a basic input or sequence of actions, and what is the corresponding output or outcome?
- How is the output influenced by the environment or configuration?
- Are there any alternative means to interact with the system that circumvents the intended interfaces? (For instance, are there concealed configuration files that can be manipulated? Is data stored in a location accessible directly? Are individual components of the system accessible directly, bypassing the more commonly used public interface?)
- How can error conditions be triggered?
- What potential outcomes might arise from using the system in unintended ways or intentionally provoking error conditions?
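Several of these questions, particularly those about inputs, outputs, and error conditions, can be explored semi-automatically. Below is a minimal harness sketch that feeds boundary and malformed inputs to a legacy entry point and records the outcomes; `legacy_parse` is a hypothetical function under exploration.

```python
# Minimal exploration harness: probe a legacy entry point with boundary
# and malformed inputs and record what happens. Exceptions are findings,
# not failures, so we log them instead of stopping.
from legacy_parser import legacy_parse  # hypothetical entry point

PROBES = ["", " ", "0", "-1", "9" * 64, "null", "<script>", "ünïcödé"]

for probe in PROBES:
    try:
        print(f"{probe!r:>12} -> {legacy_parse(probe)!r}")
    except Exception as exc:
        print(f"{probe!r:>12} -> raised {type(exc).__name__}: {exc}")
```

The printed transcript doubles as raw material for the stakeholder discussions in step 3.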
Step 3: Seek More Information From Stakeholders
This is when we can check whether we did a good job in steps 1 and 2. We should be in a position to start answering and asking questions. If we cannot answer any questions, or if the answers to our questions lead to no further exploration, then we may need to go back to step 1 or 2. Once we are sufficiently fluent in our understanding of how our legacy system works, we're ready to interview stakeholders.
Questions to ask:
- Are there any alternatives? If yes, why would someone use this legacy system instead of the alternatives?
- Is there a sales pitch? If not, try to craft a pitch that would convince someone to buy it.
- If nothing is working on the system, what would be the first thing that should work?
- If the legacy system is in telecommunications, aviation, finance, healthcare, defense, insurance, education, energy, public services, or other regulated industries, ask if there are industry standards that must be followed. Find out if one or more standards are not followed and why.
- How can we interact with the system under test at an API level? Is there any relevant documentation? (A minimal probe sketch follows this list.)
- Are we aware of any assumptions made when the system was built, or of the target ecosystems it was built for?
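For the API-level question above, even a tiny probe can tell us whether the system is reachable and what its responses look like. A minimal sketch follows; the base URL and endpoint are hypothetical and would come from stakeholders or discovered documentation.

```python
# Minimal API probe sketch; BASE_URL and the /health path are hypothetical.
import requests

BASE_URL = "http://legacy-host:8080/api"

resp = requests.get(f"{BASE_URL}/health", timeout=5)
print(resp.status_code, resp.headers.get("Content-Type"))
print(resp.text[:200])  # a first look at the payload shape
```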
Step 4: Iterate and Refine
Once we gain information and insights about how the legacy system should and should not work, we should end up with new use cases that lead to new test cases. Fixes to bugs that we may have found during our testing sessions may also introduce new problems that we need to be aware of. Extensive regression testing may be needed to address such problems, and if things work smoothly, such regression test suites are good candidates for test automation.
Organizing work: We should document our findings in a lightweight manner and capture key insights, models, and observations for reference. Create tables illustrating the rules and conditions that govern the system's behavior (a sketch of turning such a table into automated checks follows). As we learn more and refine our understanding, we should update our documentation. Our findings should be used to create a set of repeatable smoke or regression tests that represent the core functionalities of the system. As more and more functionality is uncovered, keep thinking and asking: What is core functionality and what is not?
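The rules tables mentioned above translate naturally into table-driven regression tests. Here is a minimal pytest sketch; the `shipping_cost` function and its rules are hypothetical placeholders for whatever the tables capture.

```python
# Minimal table-driven regression test: each row of the rules table
# becomes one test case via pytest.mark.parametrize.
import pytest

def shipping_cost(weight_kg: float, express: bool) -> float:
    """Toy stand-in for the legacy behavior the table documents."""
    base = 5.0 if weight_kg <= 1 else 9.0
    return base * 2 if express else base

RULES = [  # (weight_kg, express, expected_cost) — one row per table entry
    (0.5, False, 5.0),
    (0.5, True, 10.0),
    (2.0, False, 9.0),
    (2.0, True, 18.0),
]

@pytest.mark.parametrize("weight_kg,express,expected", RULES)
def test_shipping_rules(weight_kg, express, expected):
    assert shipping_cost(weight_kg, express) == expected
```

Keeping the table in one place means a newly discovered rule becomes a new row rather than a new test function.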
Remember that this process is iterative. As we gain deeper insights, we may need to revisit previous steps and refine our approach. For example, new testing levels and types of testing may become necessary. A goal for each step is to gradually build a comprehensive understanding of the legacy system's behavior, uncover bugs, and develop strategies to mitigate them.
Wrapping Up
Legacy systems, while often deemed outdated, frequently coexist with cutting-edge technologies in modern enterprises. This interplay between the past and the present can create a labyrinth of integration challenges, requiring intricate compatibility and interoperability measures to ensure smooth operations. The legacy systems might not always seamlessly align with contemporary protocols and standards, necessitating workarounds, middleware, and custom interfaces. Consequently, this integration dance contributes significantly to the complexity of the overall IT landscape. We presented a high-to-low-level analysis of complexity in IT, and we also provided a blueprint for effective legacy system testing.