The Agile methodology is a project management approach that breaks larger projects into several phases. It is an iterative process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.
On September 5, 2023, we had the opportunity to listen to Bob Galen on "An Agile Coach's Guide to Storytelling." In this session, Bob shared his experience coaching an Agile coach struggling to connect with a development manager. He underscored the transformative impact of incorporating personal narratives, lessons learned, teaching stories, and purpose or vision stories into coaching conversations. Moreover, Bob demonstrated the compelling power of storytelling in Agile coaching by using stories to share knowledge and wisdom while fostering dialogue. Watch the recording now: An Agile Coach's Guide to Storytelling — Bob Galen at the 53rd Hands-on Agile.

Abstract

"I'm going to tell you a story."

"I was coaching an Agile coach who lamented that they weren't connecting with their coaching client, in this case, a development manager. I asked them to share a typical coaching conversation, and they spoke about a series of questions they asked that essentially went unanswered — leaving them and the client quite frustrated. I asked them what other coaching techniques they had tried, and it boiled down to only coaching stances and only open-ended questions."

"I suggested they experiment with weaving some stories into their coaching conversations: personal stories, lesson-learned stories, teaching stories, purpose or vision stories, and relationship or connection-building stories. I spoke to them about sharing their knowledge and wisdom with the client via story, not dominating the conversation but augmenting it, and using the story as a backdrop to their questions and explorations with the client."

"They later told me that this small change significantly impacted their coaching after a bit of practice. This talk is about bringing the power of storytelling INTO your Agile coaching and discovering the magic of the story. Now, please share this story with others, and I hope to see you in the talk…"

During the Q&A, Bob also answered the following questions:

- How can you utilize stories in Agile coaching without being prescriptive?
- Any tips to get over the feeling of "people don't want to hear about me and my stories; they just want facts"?
- When you are an introverted Scrum Master struggling with ad-hoc verbal creativity, how can you improve storytelling?
When someone mentions lead times in software delivery, it's often unclear whether they mean the definition of lead times from Lean Software Development, the one from DevOps, or something else entirely. In this post, I look at why there are so many definitions of lead time and how you can put them to use.

Lead Time Definitions

The DevOps definition of lead time for changes is the time between a developer committing code into version control and someone deploying that change to the production environment. This definition covers a smaller part of the software delivery process than the Lean definition. Mary and Tom Poppendieck created Lean Software Development based on the lean manufacturing movement, and they measured lead time from when you discover a requirement to when someone fulfills that requirement. The Lean movement, based on the Toyota Production System, defines lead time as the time between a customer placing an order and receiving their car.

Lead Time Is a Customer Measurement

All these lead times represent a customer measurement. But they differ because the customer is different. Toyota measured the system from the perspective of a car buyer. The Poppendiecks measured the software development system as the users see it. DevOps measures the deployment pipeline from the perspective of the developer as the customer.

| Lead time | Customer | Start | End |
| --- | --- | --- | --- |
| Toyota Production System | Car buyer | Order | Delivery |
| Lean Software Development | User | Requirement | Working software |
| DevOps | Developer | Code commit | Production deployment |

The key to successful lead time measurement is representing how the customer views the elapsed time. If you run a coffee shop, you might measure the time between a customer placing an order and handing them their coffee. You might consider a two-minute lead time to be good, as your competitors take three minutes between the order and its fulfillment. However, your competitor is using a whole-system lead time, which starts when the customer joins the queue. They added another barista and reduced the queue from 15 minutes to seven. Their customers get coffee in ten minutes, but your customers have to wait 17 minutes (and you're losing customers who leave when they see the queue). Unless your lead time represents the customer's complete view of the system, you will likely optimize the wrong things.

Cycle Times

When you measure a part of the system, you're collecting a cycle time. In the car industry, it's useful to track how long it takes for a car to move along the production line. In software delivery, it's common to collect the cycle time from when a work item starts to when it's closed. This indicates the performance of software delivery without the varying wait times that can occur before work begins. As the coffee shop example shows, your customer doesn't care about cycle times. While you can use cycle times to measure different parts of the system to identify bottlenecks constraining the flow of work, you should always keep the complete system in mind. In software delivery, it's common to find a large proportion of elapsed time is due to work waiting in a queue. For example, a requirement that would take a few days to deliver might sit in a backlog for months, or a pull request may wait for approval for hours or even days. You can identify these delays by subdividing your system and measuring each part. Lead times measure the real output of a system, but cycle times help you find the system's constraint.
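To make the differences tangible, here is a minimal sketch in Java that computes the Lean lead time, the DevOps lead time for changes, and a work-item cycle time from the same set of events. The field names and timestamps are illustrative assumptions, not a standard schema from any of the sources above:

```java
import java.time.Duration;
import java.time.Instant;

// One work item's journey through the system. The event names
// (requested, started, committed, deployed) are illustrative.
record WorkItem(Instant requested,   // customer asks for the feature
                Instant started,     // a developer picks it up
                Instant committed,   // code lands in version control
                Instant deployed) {  // change reaches production

    // Lean-style lead time: requirement discovered -> requirement fulfilled.
    Duration leanLeadTime() {
        return Duration.between(requested, deployed);
    }

    // DevOps lead time for changes: commit -> production deployment.
    Duration devOpsLeadTime() {
        return Duration.between(committed, deployed);
    }

    // Cycle time: work item started -> closed (here, deployed).
    Duration cycleTime() {
        return Duration.between(started, deployed);
    }
}

public class LeadTimes {
    public static void main(String[] args) {
        WorkItem item = new WorkItem(
                Instant.parse("2023-09-01T09:00:00Z"),
                Instant.parse("2023-10-02T09:00:00Z"),
                Instant.parse("2023-10-04T15:00:00Z"),
                Instant.parse("2023-10-04T16:00:00Z"));
        System.out.println("Lean lead time:   " + item.leanLeadTime().toDays() + " days");
        System.out.println("DevOps lead time: " + item.devOpsLeadTime().toHours() + " hour(s)");
        System.out.println("Cycle time:       " + item.cycleTime().toHours() + " hours");
    }
}
```

Comparing the three durations for the same work item shows where the waiting lives: in this invented example, the deployment pipeline accounts for an hour, while the month of queueing before work even starts dominates the Lean lead time.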
All Measurements Are Useful

Lead time is valuable because it represents the customer's perception. Identifying your customer and tracking lead times as they see them ensures any improvements you make impact their experience. If you make an improvement that doesn't reduce the lead time, you've optimized the wrong part of your system. In some cases, reducing the time for the wrong part of the system can even increase the overall lead time if it adds additional stress at the constraint. A constraint is a bottleneck that limits the speed of flow for the whole system. Resolving a constraint causes the bottleneck to move, so the process of identifying and resolving constraints is continuous.

Software delivery represents a constraint to most organizations, as technology is such a key competitive advantage. However, this isn't a granular enough identification to make improvements. You need to look at your software delivery value stream and make improvements where they increase the flow of work in the system. The Theory of Constraints, created by Eli Goldratt, tells us there's always at least one constraint in a system. Optimizing anywhere other than the constraint will fail to improve the performance of the whole system. Cycle times and other part-system timers help you work out where optimization is likely to reduce the overall lead time, so you can use cycle times and lead times together to assess the improvement.

Common Software Delivery Constraints

There are some common constraints in software delivery:

- Working in large batches.
- Pull request approval queues.
- Having too many branches, or branches that have existed for too long.
- Manual testing.
- Policy constraints, such as unnecessary approvals.
- Hand-offs between functional silos (such as development, testing, and operations).

Some of these constraints are reflected in the Continuous Delivery commit cycle, which has the following recommended timings:

- Commits every 15 minutes.
- Initial build and test feedback in five minutes.
- Any failures fixed or the change reverted after ten minutes.

Conclusion

The different definitions of lead time reflect various customer perceptions of parts of the same process. You can use as many measurements of lead and cycle times as you need to find and resolve constraints in your system. You can track the lead times over the long term and use cycle times temporarily as part of a specific improvement exercise. When you improve or optimize, lead time can help you understand if you're positively impacting the whole system. Happy deployments!
Estimating work is hard as it is. Using dates rather than story points as a deciding factor can add even more complications, as dates rarely account for the work you need to do outside the actual task, like emails, meetings, and additional research. Dates are also harder to measure in terms of velocity, making it harder to estimate how much effort a body of work takes, even if you have previous experience. Story points, on the other hand, can bring more certainty and simplify planning in the long run… if you know how to use them.

What Are Story Points in Scrum?

Story points are units of measurement that you use to define the complexity of a user story. In simpler words, you'll be using a gradation of points from the simplest (smallest) to the hardest (largest) to rank how long you think it would take to complete a certain body of work. Think of them as rough time estimates of tasks in an agile project. Agile teams typically assign story points based on three major factors:

- The complexity of the work;
- The amount of work that needs to be done;
- And the uncertainty in how one could tackle a task. The less you know about how to complete something, the more time it will take to learn.

How to Estimate a User Story With Story Points

Ok, let's take a good look at the elephant in the room: there's no one cut-and-dried way of estimating story points. The way we do it in our team is probably different from your estimation method. That's why I will be talking about estimations on a more conceptual level, making sure anyone who's new to the subject matter can understand the process as a whole and then fine-tune it to their needs.

| T-shirt size | Story points | Time to deliver work |
| --- | --- | --- |
| XS | 1 | Minutes to 1-2 hours |
| S | 2 | Half a day |
| M | 3 | 1-2 days |
| L | 5 | Half a week |
| XL | 8 | Around 1 week |
| XXL | 13 | More than 1 week |
| XXXL | 21 | Full Sprint |

Story point vs. T-shirt size

Story Points of 1 and 2

Estimations that seem the simplest can sometimes be the trickiest. For example, if you've done something a lot of times and know that this one action shouldn't take longer than 10-15 minutes, then you have a pretty clear one-pointer. That being said, the complexity of a task isn't the only thing you need to consider. Let's take a look at fixing a typo on a WordPress-powered website as an example. All you need to do is log into the interface, find the right page, fix the typo, and click publish. Sounds simple enough. But what if you need to do this multiple times on multiple pages? The task is still simple, but it takes a significantly longer amount of time to complete. The same can be said about data entry and other seemingly trivial tasks that can take a while simply due to the number of actions you'll need to perform and the screens you'll need to load.

Story Point Estimation in Complex User Stories

While seemingly simple stories can be tricky, the much more complex ones are probably even trickier. Think about it: if your engineers estimate they'll need half a week to a week to complete one story, there's probably a lot they are still uncertain of regarding implementation, meaning a story like that could take much longer. Then there's the psychological factor where the team will probably go for the low-hanging fruit first and use the first half of the week to knock down the one-, two-, and three-pointers. This raises the risk of the five- and eight-pointers not being completed during the Sprint. One thing you can do is ask yourself whether the story really needs to be as complex as it is now. Perhaps it would be wiser to break it down.
You can find out whether you should break a story down using the KISS principle. KISS stands for "Keep It Simple, Stupid" and makes you question whether something needs to be as complex as it is. Applying KISS is pretty easy, too — just ask a couple of simple questions: What is the value of this story? Can the same value be achieved in a more convenient way?

"Simplicity is the ultimate sophistication." – Leonardo da Vinci

How to Use Story Points in Atlassian's Jira

A nice trick I like is to give the team the ability to assign story points to epics. Adding the story points field is nothing too in-depth or sophisticated, as a project manager needs the ability to easily assign points when creating epics. The rule of thumb here is to indicate whether your development team is experienced and well-equipped to deliver the epic or whether they would need additional resources and time to research. An example of a simpler epic could be the development of a landing page; a more complex one would be the integration of ChatGPT into a product. The T-shirt approach works like a charm here. While Jira doesn't have the functionality to add story points to epics by default, you can easily add a custom field to do the trick. Please note that you'll need admin permissions to add and configure custom fields in Jira.

Assigning story points to user stories is a bit trickier as — ideally — you'd like to take everyone's experience and expertise into consideration. Why? A project manager can decide the complexity of an epic based on what the team has delivered earlier. Individual stories are more nuanced, as engineers will usually have a more precise idea of how they'll deliver this or that piece of functionality, which tools they'll use, and how long it'll take. In my experience, T-shirt sizes don't fit here as well as the Fibonacci sequence does. The sequence exhibits a recurring pattern in which each number is obtained by adding the two numbers before it: it begins with 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and 89, and the pattern continues indefinitely. The Fibonacci sequence is used as a scoring scale in Fibonacci agile estimation, where it aids in estimating the effort required for agile development tasks. This approach proves highly valuable, as restricting the number of available values eliminates the need for extensive deliberation on complexity nuances. Determining complexity from a finite set of points is much easier: ultimately, you select either 55 or 89, rather than having to consider the entire range between 55 and 89.

As for the collaboration aspect of estimating and assigning story points to user stories, there's a handy tool called Planning Poker. It helps the team collaborate on assigning story points to their issues. Here's the trick: each team member assigns a value to an issue anonymously. Then, when the cards are revealed, it's fascinating to see if the team has reached a consensus on the complexity of the task. If different opinions emerge, it's actually a great opportunity for engaging in discussions and sharing perspectives. The best part is, this tool seamlessly integrates with Jira, making it a breeze to incorporate into your existing process. It's all about making teamwork smoother and more efficient!

How Does the Process of Assigning Story Points Work?
Before the Sprint kicks off — during the Sprint planning session — the Scrum team engages in thorough discussions regarding the tasks at hand. All the stories are carefully reviewed, and story points are assigned to gauge their complexity. Once the team commits to a Sprint, we have a clear understanding of the stories we'll be tackling and their respective point values. As the Sprint progresses, the team diligently works on burning down the stories; those that meet the Definition of Done by the Sprint's conclusion are marked as finished. Any unfinished stories are returned to the backlog for further refinement and potential re-estimation. The team has the option to reconsider and bring these stories back into the current Sprint if deemed appropriate. When this practice is consistently followed for each Sprint, the team begins to understand its velocity — a measure of the number of story points it typically completes within a Sprint — over time. It becomes a valuable learning process that aids in product management, planning, and forecasting future workloads.

What Do You Do With Story Points?

As briefly mentioned above — you burn them throughout the Sprint. While story points are good practice for estimating the amount of work you put into a Sprint, Jira makes them better with Sprint analytics showing you the number of points you've actually burned through the Sprint and comparing it to the estimation. These metrics will help you improve your planning in the long run.

- Burndown chart: This report tracks the remaining story points in Jira and predicts the likelihood of completing the Sprint goal.
- Burnup chart: This report works as the opposite of the Burndown chart. It tracks the scope independently from the work done and helps agile teams understand the effects of scope change.
- Sprint report: This report analyses the work done during a Sprint. It is used to point out either overcommitment or scope creep in a Jira project.
- Velocity chart: This is a kind of bird's-eye-view report that shows historic data of work completed from Sprint to Sprint. This chart is a nice tool for predicting how much work your team can reliably deliver based on previously burned Jira story points.

Add Even More Clarity to Your Stories With a Checklist

With a Jira Checklist, you have the ability to create practical checklists and checklist templates. They come in handy when you want to ensure accountability and consistency. This application proves particularly valuable when it comes to crafting and enhancing your stories or other tasks and subtasks. It allows you to incorporate explicit and visible checklists for the Definition of Done and Acceptance Criteria into your issues, giving you greater clarity and structure. It's ultimately a useful tool for maintaining organization and streamlining your workflow with automation. Standardization isn't about the process. It's about helping people follow it.
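As a rough illustration of how a velocity chart turns history into a forecast, here is a minimal sketch; the sprint figures are invented, and Jira derives the equivalent numbers from your completed sprints:

```java
import java.util.List;

// A minimal sketch of the arithmetic behind a velocity chart:
// average the points completed in past sprints, then use that
// average to forecast how many sprints the remaining backlog needs.
public class Velocity {
    public static void main(String[] args) {
        // Points actually completed in the last five sprints (invented data).
        List<Integer> completedPointsPerSprint = List.of(21, 18, 24, 20, 22);

        double velocity = completedPointsPerSprint.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0);

        int remainingBacklogPoints = 85;
        // Round up: a partially used sprint still occupies the calendar.
        int sprintsNeeded = (int) Math.ceil(remainingBacklogPoints / velocity);

        System.out.printf("Average velocity: %.1f points/sprint%n", velocity);
        System.out.println("Forecast: ~" + sprintsNeeded + " sprints for the remaining backlog");
    }
}
```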
Agile estimation plays a pivotal role in Agile project management, enabling teams to gauge the effort, time, and resources necessary to accomplish their tasks. Precise estimations empower teams to efficiently plan their work, manage expectations, and make well-informed decisions throughout the project's duration. In this article, we delve into various Agile estimation techniques and best practices that enhance the accuracy of your predictions and pave the way for your team's success.

The Essence of Agile Estimation

Agile estimation is an ongoing, iterative process that takes place at different levels of detail, ranging from high-level release planning to meticulous sprint planning. The primary objective of Agile estimation is to provide just enough information for teams to make informed decisions without expending excessive time on analysis and documentation. Designed to be lightweight, collaborative, and adaptable, Agile estimation techniques enable teams to rapidly adjust their plans as new information emerges or priorities shift.

Prominent Agile Estimation Techniques

1. Planning Poker

Planning Poker is a consensus-driven estimation technique that employs a set of cards with pre-defined numerical values, often based on the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.). Each team member selects a card representing their estimate for a specific task, and all cards are revealed simultaneously. If there is a significant discrepancy in estimates, team members deliberate their reasoning and repeat the process until a consensus is achieved.

2. T-Shirt Sizing

T-shirt sizing is a relative estimation technique that classifies tasks into different "sizes" according to their perceived complexity or effort, such as XS, S, M, L, and XL. This method allows teams to swiftly compare tasks and prioritize them based on their relative size. Once tasks are categorized, more precise estimation techniques can be employed if needed.

3. User Story Points

User story points serve as a unit of measurement to estimate the relative effort required to complete a user story. This technique entails assigning a point value to each user story based on its complexity, risk, and effort, taking into account factors such as workload, uncertainty, and potential dependencies. Teams can then use these point values to predict the number of user stories they can finish within a given timeframe.

4. Affinity Estimation

Affinity Estimation is a technique that involves grouping tasks or user stories based on their similarities in terms of effort, complexity, and size. This method helps teams quickly identify patterns and relationships among tasks, enabling them to estimate more efficiently. Once tasks are grouped, they can be assigned a relative point value or size category.

5. Wideband Delphi

The Wideband Delphi method is a consensus-based estimation technique that involves multiple rounds of anonymous estimation and feedback. Team members individually provide estimates for each task, and then the estimates are shared anonymously with the entire team. Team members discuss the range of estimates and any discrepancies before submitting revised estimates in subsequent rounds. This process continues until a consensus is reached.

Risk Management in Agile Estimation

Identify and Assess Risks

Incorporate risk identification and assessment into your Agile estimation process. Encourage team members to consider potential risks associated with each task or user story, such as technical challenges, dependencies, or resource constraints.
By identifying and assessing risks early on, your team can develop strategies to mitigate them, leading to more accurate estimates and smoother project execution.

Assign Risk Factors

Assign risk factors to tasks or user stories based on their level of uncertainty or potential impact on the project. These risk factors can be numerical values or qualitative categories (e.g., low, medium, high) that help your team prioritize tasks and allocate resources effectively. Incorporating risk factors into your estimates can provide a more comprehensive understanding of the work involved and help your team make better-informed decisions.

Risk-Based Buffering

Include risk-based buffering in your Agile estimation process by adding contingency buffers to account for uncertainties and potential risks. These buffers can be expressed as additional time, resources, or user story points, and they serve as a safety net to ensure that your team can adapt to unforeseen challenges without jeopardizing the project's success.

Monitor and Control Risks

Continuously monitor and control risks throughout the project lifecycle by regularly reviewing your risk assessments and updating them as new information becomes available. This proactive approach allows your team to identify emerging risks and adjust their plans accordingly, ensuring that your estimates remain accurate and relevant.

Learn From Risks

Encourage your team to learn from the risks encountered during the project and use this knowledge to improve their estimation and risk management practices. Conduct retrospective sessions to discuss the risks faced, their impact on the project, and the effectiveness of the mitigation strategies employed. By learning from past experiences, your team can refine its risk management approach and enhance the accuracy of future estimates. By incorporating risk management into your Agile estimation process, you can help your team better anticipate and address potential challenges, leading to more accurate estimates and a higher likelihood of project success. This approach also fosters a culture of proactive risk management and continuous learning within your team, further enhancing its overall effectiveness and adaptability.

Best Practices for Agile Estimation

Foster Team Collaboration

Efficient Agile estimation necessitates input from all team members, as each individual contributes unique insights and perspectives. Promote open communication and collaboration during estimation sessions to ensure everyone's opinions are considered and to cultivate a shared understanding of the tasks at hand.

Utilize Historical Data

Draw upon historical data from previous projects or sprints to inform your estimations. Examining past performance can help teams identify trends, patterns, and areas for improvement, ultimately leading to more accurate predictions in the future.

Velocity and Capacity Planning

Incorporate team velocity and capacity planning into your Agile estimation process. Velocity is a measure of the amount of work a team can complete within a given sprint or iteration, while capacity refers to the maximum amount of work a team can handle. By considering these factors, you can ensure that your estimates align with your team's capabilities and avoid overcommitting to work.

Break Down Large Tasks

Large tasks or user stories can be challenging to estimate accurately. Breaking them down into smaller, more manageable components can make the estimation process more precise and efficient.
Additionally, this approach helps teams better understand the scope and complexity of the work involved, leading to more realistic expectations and improved planning.

Revisit Estimates Regularly

Agile estimation is a continuous process, and teams should be prepared to revise their estimates as new information becomes available or circumstances change. Periodically review and update your estimates to ensure they remain accurate and pertinent throughout the project lifecycle.

Acknowledge Uncertainty

Agile estimation recognizes the inherent uncertainty in software development. Instead of striving for flawless predictions, focus on providing just enough information to make informed decisions, and be prepared to adapt as necessary.

Establish a Baseline

Create a baseline for your estimates by selecting a well-understood task or user story as a reference point. This baseline can help teams calibrate their estimates and ensure consistency across different tasks and projects.

Pursue Continuous Improvement

Consider Agile estimation an opportunity for ongoing improvement. Reflect on your team's estimation accuracy and pinpoint areas for growth. Experiment with different techniques and practices to discover what works best for your team, and refine your approach over time.

Conclusion

Agile estimation is a vital component of successful Agile project management. By employing the appropriate techniques and adhering to best practices, teams can enhance their ability to predict project scope, effort, and duration, resulting in more effective planning and decision-making. Keep in mind that Agile estimation is an iterative process, and teams should continuously strive to learn from their experiences and refine their approach for even greater precision in the future.
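Since several of the techniques above (notably Planning Poker and Wideband Delphi) share the same reveal-discuss-re-estimate loop, here is a minimal sketch of that consensus rule; the scale and the one-step consensus threshold are illustrative assumptions, not part of any formal definition:

```java
import java.util.List;

// A minimal sketch of the consensus check behind Planning Poker and
// Wideband Delphi rounds: reveal estimates, and if they spread too far
// apart, discuss the outliers and estimate again.
public class PlanningPokerRound {

    // A Fibonacci-based card deck, as commonly used.
    static final List<Integer> SCALE = List.of(1, 2, 3, 5, 8, 13, 21);

    // Assumed rule: consensus when all cards sit within one step on the scale.
    static boolean hasConsensus(List<Integer> revealedCards) {
        int min = revealedCards.stream().min(Integer::compare).orElseThrow();
        int max = revealedCards.stream().max(Integer::compare).orElseThrow();
        return SCALE.indexOf(max) - SCALE.indexOf(min) <= 1;
    }

    public static void main(String[] args) {
        List<Integer> round1 = List.of(3, 5, 13); // wide spread: discuss outliers
        List<Integer> round2 = List.of(5, 5, 8);  // converged after discussion
        System.out.println("Round 1 consensus: " + hasConsensus(round1)); // false
        System.out.println("Round 2 consensus: " + hasConsensus(round2)); // true
    }
}
```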
Beyond Unit Testing

Test-driven development (TDD) is a well-regarded technique for an improved development process, whether developing new code or fixing bugs. First, write a test that fails, then get it to work minimally, then get it to work well; rinse and repeat. The process keeps the focus on value-added work and leverages the test process as a challenge to improving the design being tested rather than only verifying its behavior. This, in turn, also improves the quality of your tests, which become a more valued part of the overall process rather than a grudgingly necessary afterthought.

The common discourse on TDD revolves around testing relatively small, in-process units, often just a single class. That works great, but what about the larger 'deliverable' units? When writing a microservice, it's the services that are of primary concern, while the various smaller implementation constructs are simply enablers for that goal. Testing of services is often thought of as outside the scope of a developer working within a single codebase. Such tests are often managed separately, perhaps by a separate team, using different tools and languages. This often makes such tests opaque and of lower quality, and adds inefficiencies by requiring a commit/deploy as well as coordination with a separate team. This article explores how to minimize those drawbacks by applying test-driven development (TDD) principles at the service level. It addresses the corollary that such tests would naturally overlap with other API-level tests, such as integration tests, by progressively leveraging the same set of tests for multiple purposes. This can also be framed as a practical guide to shift-left testing from a design as well as an implementation perspective.

Service Contract Tests

A Service Contract Test (SCT) is a functional test against a service API (black box) rather than the internal implementation mechanisms behind it (white box). In their purest form, SCTs do not include subversive mechanisms such as peeking into a database to verify results or rote comparisons against hard-coded JSON blobs. Even when run wholly within the same process, SCTs can loop back to localhost against an embedded HTTP server such as that available in Spring Boot. By limiting access through APIs in this manner, SCTs are agnostic as to whether the mechanisms behind the APIs are contained in the same or different process(es), while all aspects of serialization/deserialization can be tested even in the simplest test configuration. The general structure of an SCT is:

- Establish a starting state (preferring to keep tests self-contained)
- One or more service calls (e.g., testing stateful transitions of updates followed by reads)
- Deep verification of the structural consistency and expected behavior of the results from each call and across multiple calls

Because of the level at which they operate, SCTs may appear to be more like traditional integration tests (inter-process, involving coordination across external dependencies) than unit tests (intra-process, operating wholly within a process space), but there are important differences. Traditional integration test codebases might be separated physically (separate repositories), by ownership (different teams), by implementation (different language and frameworks), by granularity (service vs. method focus), and by level of abstraction.
These aspects can lead to costly communication overhead, and the lack of observability between such codebases can lead to redundancies, gaps, or problems tracking how those separately-versioned artifacts relate to each other. With the approach described herein, SCTs can operate at both levels: inter-process for integration-test-level comprehensiveness, and intra-process as part of the fast edit-compile-test cycle during development. By implication, SCTs operating at both levels:

- Co-exist in the development codebase, which ensures that committed code and tests are always in lockstep
- Are defined using a uniform language and framework(s), which lowers the barriers to shared understanding and reduces communication overhead
- Reduce redundancy by enabling each test to serve multiple purposes
- Enable testers and developers to leverage each other's work, or even (depending on your process) remove the need for the dev/tester role distinction to exist in the first place

Faking Real Challenges

The distinguishing challenge of testing at the service level is the scope. A single service invocation can wind through many code paths across many classes and include interactions with external services and databases. While mocks are often used in unit tests to isolate the unit under test from its collaborators, they have downsides that become more pronounced when testing services. The collaborators at the service testing level are the external services and databases, which, while fewer in number than internal collaboration points, are often more complex. Mocks do not possess the attributes of good programming abstractions that drive modern language design; there is no abstraction, no encapsulation, and no cohesiveness. They simply exist in the context of a test as an assemblage of specific replies to specific method invocations. When testing services, those external collaboration points also tend to be called repeatedly across different tests. As mocks require a precise understanding and replication of collaborator requests/responses that are not even in your control, it is cumbersome to replicate and manage that malleable know-how across all your tests.

A more suitable service-level alternative to mocks is fakes, another form of test double. A fake object provides a working, stateful implementation of its interface with implementation shortcuts that make it unsuitable for production. A fake, for example, may lack actual persistence while otherwise providing a fully (or mostly, as deemed necessary for testing purposes) functionally consistent representation of its 'real' counterpart. While mocks are told how to respond (when you see exactly this, do exactly that), fakes know themselves how to behave (according to their interface contract). Since we can make use of the full range of available programming constructs, such as classes, when building fakes, it is more natural to share them across tests, as they encapsulate the complexities of external integration points that need not then be copied and pasted throughout your tests. While the unconstrained versatility of mocks does, at times, have its advantages, the inherent coherence and shareability of fakes make them appealing as the primary implementation vehicle for the complexity behind SCTs.

Alternately Configured Tests (ACTs)

Being restricted to an appropriately high level of API abstraction, SCTs can be agnostic about whether fake or real integrations are running underneath. The same set of service contract tests can be run with either set.
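As a minimal sketch of the mock/fake distinction, consider a hypothetical OrderGateway task object fronting an external order service (the names are illustrative, not from the article). The fake keeps its own in-memory state and honors the interface contract, so any test can exercise create-then-query flows without scripting individual responses:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// The task object's contract, normally backed by a remote order service.
interface OrderGateway {
    void create(Order order);
    Optional<Order> findById(String id);
    List<Order> findByStatus(String status);
}

record Order(String id, String status) {}

// A fake: a working, stateful implementation with shortcuts
// (in-memory storage instead of real persistence or remote calls).
class FakeOrderGateway implements OrderGateway {
    private final List<Order> orders = new ArrayList<>();

    @Override
    public void create(Order order) {
        orders.add(order); // stored in memory, not sent over the wire
    }

    @Override
    public Optional<Order> findById(String id) {
        return orders.stream().filter(o -> o.id().equals(id)).findFirst();
    }

    @Override
    public List<Order> findByStatus(String status) {
        return orders.stream().filter(o -> o.status().equals(status)).toList();
    }
}
```

A 'real' implementation of the same interface would call the actual service; swapping one for the other is exactly what the test configurations described next make routine.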
If the integrated entities, here referred to as task objects (because they often can be run in parallel), are written without assuming particular implementations of other task objects (in accordance with the "L" and "D" principles in SOLID), then different combinations of task implementations can be applied for any purpose. One configuration can run all fakes, another fakes mixed with real, and another all real. These Alternately Configured Tests (ACTs) suggest a process: start with all fakes and move to all real, possibly with intermediate points of mixing and matching. TDD begins in a walled-off garden with the 'all fakes' configuration, where there is no dependence on external data configurations and which runs fast because it operates in process. Once all SCTs pass in that test configuration, subsequent configurations are run, each further verifying functionality while having only to focus on the changed elements with respect to the previous working test configuration. The last step is to configure as many 'real' task implementations as required to match the intended level of integration testing.

ACTs exist when there are at least two test configurations. Two are often all that is needed, but at times it can be useful to provide a more incremental sequence from the simplest to the most complex configuration. Intermediate test configurations might be a mixture of fake and real, or semi-real task implementations that hit in-memory or containerized implementations of external integration points.

Balancing SCTs and Unit Testing

Relying on unit tests alone for test coverage of classes with multiple collaborators can be difficult because you're operating at several levels removed from the end result. Coverage tools tell you where there are untried code paths, but are those code paths important, do they have little impact, and are they even executed at all? High test coverage does not necessarily equal confidence-engendering test coverage, which is the real goal. SCTs, in contrast, are by definition always relevant to and important for the purpose of writing services. Unit tests focus on the correctness of classes, while SCTs focus on the correctness of your API. This focus necessarily drives deep thinking about the semantics of your API, which in turn can drive deep thinking about the purpose of your class structure and how the individual parts contribute to the overall result. This has a big impact on the ability to evolve and change: tests against implementation artifacts must be changed when the implementation changes, while tests against services must change only when there is a functional service-level change. While there are change scenarios that favor either case, refactoring freedom is often regarded as paramount from an agile perspective. Tests encourage refactoring when you have confidence that they will catch errors introduced by refactoring, but tests can also discourage refactoring to the extent that refactoring results in excessive test rework. Testing at the highest possible level of abstraction makes tests more stable under refactoring. Written at the appropriate level of abstraction, SCTs also become more accessible to a wider community (quality engineers, API consumers).
The best way to understand a system is often through its tests; since those tests are expressed in the same API used by its consumers, those consumers can not only read them but also possibly contribute to them in the spirit of Consumer-Driven Contracts. Unit tests, on the other hand, are accessible only to those with deep familiarity with the implementation. Despite these differences, it is not a question of SCTs vs. unit tests, with one excluding the other. They each have their purpose; there is a balance between them. SCTs, even in a test configuration with all fakes, can often achieve most of the required code coverage, while unit testing can fill in the gaps. SCTs also do not preclude the benefits of unit testing with TDD for classes with minimal collaborators and well-defined contracts. SCTs can significantly reduce the volume of unit tests against classes without those characteristics. The combination is synergistic.

SCT Data Setup

To fulfill its purpose, every test must work against a known state. This can be a more challenging problem for service tests than for unit tests, since the external integration points are outside of the codebase. Traditional integration tests sometimes handle data setup through an out-of-band process, such as database seeding with automated or manual scripts. This makes tests difficult to understand without hunting down that external state or those external processes, and it is subject to breaking at any time through circumstances outside your control. If updates are involved, care must be taken to reset or restore the state at the test start or end. If multiple users happen to run the tests at the same time, care must be taken to avoid update conflicts.

A better approach is tests that independently set up (and possibly tear down) their own target state without conflicting with other users. For example, an SCT that tests the filtered retrieval of orders would first create an order with a unique ID and with field values set to the test's expectations before attempting to filter on it. Self-contained tests avoid the pitfalls of shared, separately controlled states and are much easier to read as well. Of course, direct data setup is not always possible, since a given external service might not provide the mutator operations needed for your test setup. There are several ways to handle this:

- Add testing-only mutator operations. These might even go to a completely different service that isn't otherwise required for production execution.
- Provide a mixed fake/real test configuration using fakes for the update-constrained external service(s), then employ a mechanism to skip such tests for test configurations where those fake tasks are not active. This at least tests the real versions of other tasks.
- Externally pre-populated data can still be employed with SCTs and can still be run with fakes, provided those fakes expose equivalent results. For tests whose purpose is not actually validating updates (i.e., updates are only needed for test setup), this at least avoids any conflicts with multiple simultaneous test executions.

Providing Early Working Services

A test-filtering mechanism can be employed to only run tests against select test configurations. For example, a given SCT may initially work only against fakes but not against other test configurations. That restricted SCT can be checked into your code repository, even though it is not yet working across all test configurations.
This orients toward smaller commits and can be useful for handing off work between team members, who would then make that test work under more complex configurations. Done right, the follow-on work need only focus on implementing the real task without breaking the already-working SCTs. This benefit can be extended to API consumers. Fakes can serve to provide early, functionally rich implementations of services without those consumers having to wait for a complete solution. Real-task implementations can then be incrementally introduced with little or no consumer code changes.

Running Remote

Because SCTs are embedded in the same executable space as your service code under test, all can run in the same process. This is beneficial for the initial design phases, including TDD, and running on the same machine provides a simple way to execute tests, even at the integration test level. Beyond that, it can sometimes be useful to run the two on different machines. This might be done, for example, to bring up a test client against a fully integrated running system in staging or production, perhaps also for load/stress testing. An additional use case is testing backward compatibility: a test client with a previous version of SCTs can be brought up separately from, and run against, the newer versioned server in order to verify that the older tests still run as expected. Within an automated build/test pipeline, several versions can be managed this way.

Summary

Service Contract Tests (SCTs) are tests against services. Alternately Configured Tests (ACTs) define multiple test configurations that each provide a different task implementation set. A single set of SCTs can be run against any test configuration. Even though SCTs can be run with a test configuration that is entirely in process, the flexibility offered by ACTs distinguishes them from traditional unit/component tests. SCTs and unit tests complement one another. With this approach, test-driven development (TDD) can be applied to service development. This begins by creating SCTs against the simplest possible in-process test configuration, which is usually also the fastest to run. Once those tests have passed, they can be run against more complex configurations and ultimately against a test configuration of fully 'real' task implementations to achieve the traditional goals of integration or end-to-end testing. Leveraging the same set of SCTs across all configurations supports an incremental development process and yields great economies of scale.
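To make the approach concrete, here is a minimal sketch of an SCT as it might look in Spring Boot (named in the article as a suitable embedded server). The /orders endpoint, the OrderDto, and the "fakes" profile are hypothetical stand-ins, and profile switching is just one possible way to realize ACT-style configuration swapping:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.SpringBootTest.WebEnvironment;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.ActiveProfiles;

import static org.assertj.core.api.Assertions.assertThat;

// An SCT loops back over HTTP against the embedded server, so routing and
// serialization are exercised even in the all-fakes configuration.
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@ActiveProfiles("fakes") // ACT: switch to "real" (or a mix) without editing the test
class OrderContractTest {

    @Autowired
    TestRestTemplate http;

    @Test
    void filteringOrdersByStatusReturnsOnlyMatches() {
        // Self-contained setup: create the state this test relies on.
        http.postForEntity("/orders", new OrderDto("o-1", "OPEN"), OrderDto.class);
        http.postForEntity("/orders", new OrderDto("o-2", "CLOSED"), OrderDto.class);

        ResponseEntity<OrderDto[]> response =
                http.getForEntity("/orders?status=OPEN", OrderDto[].class);

        // Verify behavior through the API alone; no peeking into the database.
        assertThat(response.getStatusCode().is2xxSuccessful()).isTrue();
        assertThat(response.getBody()).extracting(OrderDto::status)
                .containsOnly("OPEN");
    }

    record OrderDto(String id, String status) {}
}
```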
As a Product Owner (PO), your role is crucial in steering an agile project toward success. However, it's equally important to be aware of the pitfalls that can lead to failure. It's worth noting that the GIGO (Garbage In, Garbage Out) effect is a significant factor: no good product can come from bad design.

On Agile and Business Design Skills

Lack of Design Methodology Awareness

One of the first steps towards failure is disregarding design methodologies such as Story Mapping, Event Storming, Impact Mapping, or Behavior-Driven Development. Treating these methodologies as trivial or underestimating their complexity or power can hinder your project's progress. Instead, take the time to learn, practice, and seek coaching in these techniques to create well-defined business requirements. For example, I once worked on a project where the PO practiced Story Mapping without even involving the end-users...

Ignoring Domain Knowledge

Neglecting to understand your business domain can be detrimental. Don't skip internal training sessions, Massive Open Online Courses (MOOCs), and field observation workshops. Read domain reference books and, more generally, embrace domain knowledge to make informed decisions that resonate with both end-users and stakeholders. To continue with the previous example, the PO, who was new to the project's domain (although having basic knowledge), missed an entire use-case with serious architectural implications due to this lack of skills, requiring significant software changes after only a few months.

Disregarding End-User Feedback

Overestimating your understanding and undervaluing end-user feedback can lead to the Dunning-Kruger effect. Embrace humility and actively involve end-users in the decision-making process to create solutions that truly meet their needs. Failure to consider real-world user constraints and work processes can lead to impractical designs. Analyze actual, operational user experiences, collect feedback, and adjust your approach accordingly. Don't imagine users' requirements and issues; ask actual users who deal with real-world complexity all the time. For instance, a PO I worked with ignored or postponed many obvious GUI issues raised by end-users, rendering the application nearly unusable. These UX issues included the absence of basic filters on screens, making it impossible for users to find their ongoing tasks, yet they were relatively simple to fix. Conversely, this PO pushed unasked-for features, and even features rejected by most end-users, such as complex GUI locking options. Furthermore, any attempt to set up tools to collect end-user feedback was dismissed.

Team Dynamics

Centralized Decision-Making

Isolating decision-making authority in your own hands, without consulting IT or other designers, can stifle creativity and collaboration. Instead, foster open communication and involve team members in shaping the project's direction. The three pillars of empirical process control, as defined in Scrum, are Transparency, Inspection, and Adaptation. The essence of an agile team is continuous improvement, which becomes challenging when a lack of trust hinders the identification of real issues. Some POs unfortunately adopt a "divide and rule" approach, which keeps knowledge and power in their sole hands. I have observed instances where POs withheld information, or even released incorrect information, to both end-users and developers, and actively prevented any exchange between them.
Geographical Disconnection

Geographically separating end-users, designers, testers, the PO, and developers can hinder communication. Leverage modern collaboration tools, but don't rely solely on them. Balance digital tools with face-to-face interactions to maintain strong team connections and enable osmotic communication, which has proven to be highly efficient in keeping everyone informed and involved. The worst case I had to deal with was a project where developers were centralized in the same building as the end-users, while the PO and design team were distributed in another city. Most workshops were done remotely between the two cities. In the end, the design result was very poor. It improved drastically when some designers were finally collocated with the end-users (and developers) and were able to conduct in situ formal and informal workshops.

Planning and Execution

Over-Optimism and Lack of Contingency Plans

Hope should not be your strategy. Don't oversell features to end-users. Being overly optimistic and neglecting backup plans can lead to missed deadlines and unexpected challenges. Develop robust contingency plans (a Plan B) to navigate uncertainties effectively. Avoid promising unsustainable plans to stakeholders: after two or three delays, they may lose trust in the project. I worked on a project where the main release was announced to stakeholders by the PO every two months over a 1.5-year timeline, without consulting the development team. As you can imagine, the effect on the image of the project was devastating.

Inadequate Stakeholder Engagement

Excluding business stakeholders from demos and delaying critical communications can lead to misunderstandings and misaligned expectations. Regularly engage stakeholders to maintain transparency and gather valuable feedback. As an illustration, in a previous project, we conducted regular sprint demos; however, we failed to invite end-users to most sessions. Consequently, significant ergonomic issues went unnoticed, resulting in a substantial loss of time. Additionally, within the same project, the PO organized meetings with end-users mainly to present solutions via fully completed mockups, rather than facilitating discussions to precisely identify operational requirements, which inhibited their input.

Embracing Waterfall Practices

Thinking in terms of a waterfall approach, rather than embracing iterative development, can hinder progress, especially on a project meant to be managed with agile methodologies. Minimize misunderstandings by providing regular updates to stakeholders. Break features into increments, leverage Proofs of Concept (POCs), and prioritize the creation of Minimum Viable Products (MVPs) to validate assumptions and ensure steady progress. As an example, I recently had a meeting with end-users explaining that a one-year coding-tunnel period resulted in a first application version that was almost unusable and worse than the 20-year-old application we were supposed to rewrite. With re-established communication and end-user involvement, this was fixed in a few months.

Producing Too Much Waste

As a designer, avoid creating a large stock of user stories (US) that will only be implemented months or years later. Otherwise, you work against the Lean principle of fighting the overproduction muda (waste), and you produce many specifications at the worst moment (when you know the least about actual business requirements); this work has every chance of being thrown away.
I had an experience where a PO and their designer team wrote user stories up to a year before they were actually coded and left them almost unmaintained. As expected, most of that work was thrown away or, even worse, caused various flaws and misunderstandings among the development team when finally planned for the next sprint. Most backlog refinements and explanations had to be redone. User stories should be refined to a detailed state only one or two sprints before being coded. However, it's a good practice to fill the backlog sandbox with generally outlined features. The rule of thumb is straightforward: user stories should be detailed as close to the coding stage as possible. When they are fully detailed, they are ready for coding. Otherwise, you are likely to waste time and resources.

Volatile Objectives

Try to set consistent objectives for each sprint. Avoid context switching among developers, which can lead to starting many different features but never finishing any. To provide an example, in a project where the PO interacted with multiple partners, priorities were altered every two or three sprints, mainly due to political considerations. This was often done to appease the most frustrated partners who were awaiting certain features (often promised with unrealistic deadlines).

Lack of Planning Flexibility

Utilize the DevOps methodology toolkit, including tools such as feature flags, dark deployments, and canary testing, to facilitate more streamlined planning and deployment processes. As an architect, I once had a tough time convincing a PO to use a canary-testing deployment strategy to learn fast and release early while greatly limiting risks. After a resounding failure when opening the application to the entire population, we finally used canary testing and discovered performance and other critical issues on a limited set of voluntary end-users. It is now a critical part of the project management toolkit we use extensively.

Extended Delays Between Deployments

Even if a product is built incrementally in two- or three-week timeframes, many large projects (including all those I've been a part of) tend to wait for several iterations before deploying the software to production. This presents a challenge because each iteration should ideally deliver some form of value, even if relatively small, to end-users. This approach aligns with the mantra famously advocated by Linus Torvalds: "Release early, release often." Some POs are hesitant to push iterations into production, often for misguided reasons. These concerns can include fears of introducing bugs (indicating a lack of automated and acceptance testing), incomplete iterations (highlighting issues with user story estimation or development team velocity), a desire to provide end-users with a more extensive set of features in one go, thinking they'll appreciate it, or an attempt to simplify the user learning curve (revealing potential user experience (UX) shortcomings). In my experience, this hesitation tends to result in the accumulation of various issues, such as bugs or performance problems.

Design Considerations

Solution-First Mentality

Prioritizing solutions over understanding the business needs can lead to misguided decisions. Focus on the "why" before diving into the "how" to create solutions that truly address user requirements. As a bad practice, I've seen user stories including technical content (like SQL queries) or presenting detailed technical operations or screens as business rules.
Oversized User Stories

Designing large, complex user stories instead of breaking them into manageable increments can lead to confusion and delays. Embrace smaller, more focused user stories to facilitate smoother development, predictability in planning, and testing. Inexperienced POs often find it challenging to break down features into small, manageable user stories. This is something of an art, and there are numerous ways of accomplishing it depending on the context. However, it's important to remember that each story should deliver value to end-users. As an example, in a previous project, the PO struggled to divide stories effectively, or engaged in purely technical splitting, such as creating one user story for the frontend and another for the backend portion of a substantial feature. Consequently, half the time this resulted in incomplete user stories that required rescheduling for the subsequent sprint.

Neglecting Expertise

Avoiding consultation with experts such as UX designers, accessibility specialists, and legal advisors can result in suboptimal solutions. Leverage their insights to create more effective and user-friendly designs. As a case in point, I've observed multiple projects where the lack of proper user experience (UX) work led to inadequately designed graphical user interfaces (GUIs), incurring substantial costs for rectification at a later stage. In specific instances, certain projects demanded legal expertise, particularly in matters of data privacy. Moreover, I encountered a situation where a PO failed to involve legal specialists, resulting in the final product omitting crucial legal notices and even necessitating significant architectural revisions.

Ignoring Performance Considerations

Neglecting performance constraints, such as displaying excessive data on screens without filters, can negatively impact user experience. Prioritize efficient design to ensure optimal system performance. I once worked on a large project where the PO requested the computation of a Gantt chart involving tens of thousands of tasks spanning over five years. Ironically, in 99.9% of cases, a single week's span was sufficient. This unnecessarily intricate requirement significantly complicated the design process and resulted in the product becoming nearly unusable due to its excessive slowness.

Using the Wrong Words

Failing to establish a shared business language and glossary can create confusion between technical and business teams. Embrace the Ubiquitous Language (UL) principle from Domain-Driven Design to enhance communication and clarity. I once worked on a project where the PO and designers didn't set up any glossary of business terms, used custom vocabulary instead of the business's own, and used fuzzy or interchangeable synonyms even for the terms they had coined themselves. This created many issues and much confusion among the team and end-users, and even duplicated work.

Postponing Legal and Regulatory Considerations

Late discovery of legal, accessibility, or regulatory requirements can lead to costly revisions. Incorporate these considerations early to avoid setbacks during development. I observed a significantly large project where the Social Security number had to be eliminated later on. This led to the need for additional transformation tools, since this constraint was not taken into account from the beginning.

Code Considerations

Interferences

Refine business requirements, and don't interfere with code organization, which often has its own constraints.
For instance, asking the development team to always enforce the reuse (DRY) principle through very generic interfaces comes from a good intention but may greatly overcomplicate the code (which violates the KISS principle). In a recent project, a Product Owner with a background in development frequently complicated the design by explicitly instructing developers to extend existing endpoints or SQL queries instead of creating entirely new ones, which would have been simpler. Many developers followed the instructions outlined in the user stories without fully grasping the potential drawbacks in the actual implementation. This occasionally resulted in convoluted code and wasted time rather than efficiency gains.
Acceptance Testing
Neglecting Alternate Paths
Focusing solely on nominal cases ("happy paths") and ignoring real-world scenarios can result in very incomplete testing. Ensure that all possible paths, including corner cases, are thoroughly tested to deliver a robust solution. In a prior project, a multitude of bugs and crashes surfaced exclusively in production because testing had been limited to nominal scenarios. This led to team disorganization, as urgent hotfixes had to be written immediately, tarnishing the project's reputation and incurring substantial costs.
Missing Acceptance Criteria
Leverage the Three Amigos principle to involve cross-functional team members in creating comprehensive acceptance criteria, and incorporate examples in user stories to clarify expectations and ensure consistent understanding. Example mapping is a great workshop for this. Writing examples down ensures several things: first, that you have at least one realistic case for the requirement and that it is not imaginary; second, listing different cases is a powerful way to surface the alternate paths exhaustively (see the previous point); and last, it is some of the best shared-understanding material you can give developers (a minimal executable-specification sketch appears at the end of this acceptance testing section). By way of illustration, when designers began documenting real-life scenarios as Behavior-Driven Development (BDD) executable specifications, numerous alternate paths emerged naturally. This led to a reduction in production issues (as discussed in the previous section) and a gradual slowdown in their occurrence.
Lack of Professional Testing Expertise
Incorporating professional testers and testing tools enhances defect detection and overall quality. Invest in thorough testing to identify issues early and ensure a smoother user experience. Not using tools also makes it more difficult for external stakeholders to figure out what has actually been tested. Conducting rigorous testing is a genuine skill. In a previous project, I witnessed testers using basic spreadsheets to record and track testing scenarios, which made it difficult to determine accurately what had and hadn't been tested. Consequently, the Product Owner had to validate releases without a clear understanding of the testing coverage. Tools like the open-source Squash TM are excellent for specifying test requirements and monitoring acceptance test coverage. Furthermore, the testers were not testing professionals but designers, which frequently made it hard to obtain detailed bug reports; the reports lacked precision, including crucial information such as the exact time, logs, scenarios, and datasets necessary for effective issue reproduction.
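To illustrate what an executable specification can look like, here is a minimal sketch written as a plain pytest test with Given/When/Then comments. The domain (refusing a withdrawal on an insufficient balance, i.e., an alternate path rather than the happy path) and all names are invented for the example; teams often use dedicated BDD tooling such as behave or pytest-bdd on top of Gherkin feature files instead.
Python
# A minimal executable specification for an alternate path (hypothetical domain).
import pytest


class InsufficientFunds(Exception):
    pass


class Account:
    def __init__(self, balance: int) -> None:
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount


def test_withdrawal_is_refused_when_balance_is_insufficient():
    # Given an account with a balance of 50
    account = Account(balance=50)
    # When the user tries to withdraw 80
    # Then the withdrawal is refused and the balance is unchanged
    with pytest.raises(InsufficientFunds):
        account.withdraw(80)
    assert account.balance == 50

Each concrete example like this documents one path through a rule and fails loudly if the rule regresses, which is what makes it shared-understanding material rather than prose.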
Take-Away Summary
Symptom: A solution that is not aligned with end-users' needs.
Possible causes and solutions:
Ineffective workshops with end-users:
- If workshops are conducted remotely, consider organizing them onsite.
- Ensure you are familiar with agile design methods like Story Mapping.
Insufficient attention to end-users' needs:
- Make sure to understand the genuine needs and concerns of end-users, and avoid relying solely on personal intuitions or managerial opinions.
- Gather end-users' feedback early and frequently.
- Utilize appropriate domain-specific terminology (Ubiquitous Language).
Symptom: Limited trust from end-users and/or the development team.
Possible causes and solutions:
Centralized decision-making:
- Foster open communication and involve team members in shaping the project's direction.
- Enhance transparency through increased communication and information sharing.
Unrealistic timelines:
- Remember that "Hope is not a strategy"; avoid excessive optimism.
- Aim for consistent objectives in each sprint and establish a clear trajectory.
- Employ tools that enhance schedule flexibility and ensure secure production releases, such as canary testing.
Symptom: Design overhead.
Possible causes and solutions:
User story overproduction:
- Minimize muda (waste) and refine user stories only when necessary, just before they are coded.
Challenges in designer-development team communication:
- Encourage regular physical presence of both the design and development teams in the same location, ideally several days a week, to enhance direct and osmotic communication.
- Focus on describing the "why" rather than the "how"; leave technical specifications to the development team. For instance, when designing a database model, you might create the conceptual data model, but ensure the team knows it's not the physical data model.
Symptom: Discovery of numerous production bugs.
Possible causes and solutions:
Incomplete acceptance testing:
- Develop acceptance tests simultaneously with the user stories and in collaboration with the future testers.
- Conduct tests in a professional and traceable manner, involving trained testers who use appropriate tools.
- Test not only the happy paths but also as many alternative paths as possible.
Lack of automation:
- Implement automated tests, especially unit tests and, equally important, executable specifications (Behavior-Driven Development) derived from the acceptance tests outlined in the user stories. Explore tools like Spock.
Conclusion
By avoiding these common pitfalls, you can significantly increase the chances of a successful agile project. Remember, effective collaboration, clear communication, and a user-centric mindset are key to delivering valuable outcomes. Product Owner is a role, not merely a job: it necessitates training, support, and a readiness to continuously challenge our assumptions. It's worth noting that a project can fail even with a good design when blueprints and good coding practices are not followed, but that is an entirely different topic. However, because of the GIGO ("garbage in, garbage out") effect, no good product can ever come out of a bad design phase.
Working more than 15 years in IT, I have rarely met programmers who enjoy writing tests, and only a few people who use something like TDD. Is this really such an uninteresting part of the software development process? In this article, I'd like to share my experience of using TDD.
In most of the teams I worked with, programmers wrote code. Often, they didn't write tests at all or added them symbolically. The mere mention of the TDD abbreviation made programmers panic. The main reason is that many people misunderstand what TDD means and try to avoid writing tests. It is generally assumed that TDD means the usual tests, just written before the implementation. But this is not quite true. TDD is a culture of writing code. The approach implies a certain order of solving a task and a specific way of thinking. TDD means solving a task in loops, or iterations. Formally, the cycle consists of three phases:
1. Writing a test that feeds something to the input and checks the output. At this point, the test doesn't pass.
2. Writing the simplest implementation with which the test passes.
3. Refactoring: changing the code without changing the test.
The cycle repeats until the problem is solved.
TDD Cycle
I use a slightly different algorithm. In my approach, refactoring most often becomes its own cycle. That is, I write the test, and then I write the code; next, I adjust the test again and write more code, because refactoring often requires edits to the test as well (various mocks, generation of instances, links to existing modules, etc.), though not always. I won't describe the general algorithm of what we will do the way textbooks do; I'll just show you an example of how it works for me.
Example
Imagine we got a task. No matter how it is described, we can clarify it ourselves, coordinate it with the customer, and solve it. Let's suppose the task is described something like this: "Add an endpoint that returns the current time and user information (id, first name, last name, phone number). Also, this information must be signed based on a secret key." I will not complicate the task for the demonstration. But in real life, you may need to make a full-fledged digital signature and supplement it with encryption, and such an endpoint would be added to an existing project. For academic purposes, we will create it from scratch, using FastAPI.
Most programmers would just start working on this task without detailed study, keeping everything they can in their heads. After all, such a task does not seem to need dividing into subtasks, since it is quite simple and quickly implemented. While working on it, they would clarify the stakeholders' requirements and ask questions. And at the end, they would write tests, anxiously. But we will do it differently. It may seem unexpected, but let's take something from the Agile methodology. Firstly, this task can be divided into logically complete subtasks, and all the requirements can be clarified up front. Secondly, it can be done iteratively, having something working at each step (even if incorrect) that can be demonstrated.
Planning
Let's start with the following partition.
The First Subtask
Make an empty FastAPI app work with one method.
Acceptance criteria: There is a FastAPI app, and it can be launched. The GET request "/my-info" returns a response with code 200 and the body {} (empty JSON).
The Second Subtask
Add a model/User object. At this stage, it will just be a Pydantic schema for the response.
You will have to agree with the business on the names of the fields and whether it is necessary to somehow convert the values (filter, clean, or something else).
Acceptance criteria: The GET request "/my-info" returns a response with code 200 and the body {"user":{"id":1,"firstname":"John","lastname":"Smith","phone":"+995000000000"}}.
The Third Subtask
Add the current time to the response. Again, we need to agree on the time format and the name of the field.
Acceptance criteria: The GET request "/my-info" returns a response with code 200 and the body {"user":{added earlier},"timestamp":1691377675}.
The Fourth Subtask
Add a signature. Immediately, some questions for the business appear: Where to add it? How to form it? Where to get the key? Where to store it? Who has access? And so on. As a result, we use a simple algorithm:
1. We take base64 of the JSON response body.
2. We concatenate it with the private key. At first, we use an empty string as the key.
3. We take md5 of the resulting string.
4. We add the result to the X-Signature header.
Acceptance criteria: The GET request "/my-info" returns the response described earlier without changes, but with an additional header: "X-Signature":"638e4c9e30b157cc56fadc9296af813a". For this step, the X-Signature is calculated manually. Base64 = eyJ1c2VyIjp7ImlkIjoxLCJmaXJzdG5hbWUiOiJKb2huIiwibGFzdG5hbWUiOiJTbWl0aCIsInBob25lIjoiKzk5NTAwMDAwMDAwMCJ9LCJ0aW1lc3RhbXAiOjE2OTEzNzc2NzV9.
Note that the endpoint returns hard-coded values. How finely tasks should be split is up to you; this is just an example. The most important thing will be described further. These four subtasks result in an endpoint that always returns the same response. But there is a question: why have we described the stub in such detail? Here is the reason: these subtasks don't have to be physically present. They are just steps, needed in order to apply the TDD practice. However, their presence on any storage medium other than our memory will make our work much easier. So, let's begin.
Implementation
The First Subtask
We add the main.py file to the app directory.
Python
from fastapi import FastAPI

app = FastAPI()


@app.get("/my-info")
async def my_info():
    return {}

Right after that, we add one test, for example, in the same directory: test_main.py.
Python
from fastapi.testclient import TestClient

from .main import app

client = TestClient(app)


def test_my_info_success():
    response = client.get("/my-info")
    assert response.status_code == 200
    assert response.json() == {}

As a result of the first subtask, we added just a few lines of code and a test. At the very beginning, a simple test appeared. It does not cover the business requirements at all; it checks only one case — one step. Obviously, writing such a test does not cause much negativity. And at the same time, we have working code that can be demonstrated.
The Second Subtask
We add JSON to the verification. To do this, we replace the last line of the test.
Python
result = {
    "user": {
        "id": 1,
        "firstname": "John",
        "lastname": "Smith",
        "phone": "+995000000000",
    },
}
assert response.json() == result

❌ Now, the test fails. We change the code so that the test passes. We add the schema file.
Python
from pydantic import BaseModel


class User(BaseModel):
    id: int
    firstname: str
    lastname: str
    phone: str


class MyInfoResponse(BaseModel):
    user: User

We change the main file. We add the import:
Python
from .scheme import MyInfoResponse, User

We change the router function.
Python
@app.get("/my-info", response_model=MyInfoResponse)
async def my_info():
    my_info_response = MyInfoResponse(
        user=User(
            id=1,
            firstname="John",
            lastname="Smith",
            phone="+995000000000",
        ),
    )
    return my_info_response

✅ Now, the test passes, and we have working code again.
The Third Subtask
We add "timestamp": 1691377675 to the test.
Python
result = {
    "user": {
        "id": 1,
        "firstname": "John",
        "lastname": "Smith",
        "phone": "+995000000000",
    },
    "timestamp": 1691377675,
}

❌ The test fails again. We change the code so that the test passes. To do this, we add the timestamp to the schema.
Python
class MyInfoResponse(BaseModel):
    user: User
    timestamp: int

We add its initialization to the main file.
Python
my_info_response = MyInfoResponse(
    user=User(
        id=1,
        firstname="John",
        lastname="Smith",
        phone="+995000000000",
    ),
    timestamp=1691377675,
)

✅ The test passes again.
The Fourth Subtask
We add the "X-Signature" header verification to the test: "638e4c9e30b157cc56fadc9296af813a".
Python
assert response.headers.get("X-Signature") == "638e4c9e30b157cc56fadc9296af813a"

❌ The test fails again. We add this header to the application's response. To do this, we add middleware. After all, we will most likely need a signature for other endpoints of the application. But this is just our choice, which in reality could be different, so as not to complicate the code. Let's do it this way to understand the technique. We add the Request import.
Python
from fastapi import FastAPI, Request

And the middleware function.
Python
@app.middleware("http")
async def add_signature_header(request: Request, call_next):
    response = await call_next(request)
    response.headers["X-Signature"] = "638e4c9e30b157cc56fadc9296af813a"
    return response

✅ The test passes again. At this stage, we have a ready-made working test for the endpoint. Next, we will change the application, converting it from a stub into fully working code while checking it with just this one ready-made test. This step can already be considered refactoring, but we will do it in exactly the same small steps.
The Fifth Subtask
Implement the signature calculation. The algorithm is described above, as well as the acceptance criteria, but the signature should change depending on the user's data and the timestamp. Let's implement it. ✅ The test passes, and we don't touch it at this step. That is, we do a full-fledged refactoring. We add the signature.py file with the following code:
Python
import base64
import hashlib


def generate_signature(data: bytes) -> str:
    m = hashlib.md5()
    b64data = base64.b64encode(data)
    m.update(b64data + b"")  # an empty private key, for now
    return m.hexdigest()

We change main.py. We add the imports:
Python
from fastapi import FastAPI, Request, Response

from .signature import generate_signature

We change the middleware.
Python
@app.middleware("http")
async def add_signature_header(request: Request, call_next):
    response = await call_next(request)
    body = b""
    async for chunk in response.body_iterator:
        body += chunk
    response.headers["X-Signature"] = generate_signature(body)
    return Response(
        content=body,
        status_code=response.status_code,
        headers=dict(response.headers),
        media_type=response.media_type,
    )

Here is the result of a complication that we didn't strictly need. We did not get the best solution, since we have to read the entire response body and build our own Response, but it is quite suitable for our purposes. ✅ The test still passes.
The Sixth Subtask
Replace the timestamp with the actual current time.
Acceptance criteria: The timestamp in the response returns the actual current time value, and the signature is generated correctly. To generate the time, we will use int(time.time()).
First, we edit the test. Now, we have to freeze the current time. The imports:
Python
from datetime import datetime

from freezegun import freeze_time

We make the test look like the one below. Since freezegun accepts either a datetime object or a string with a date, but not a Unix timestamp, the timestamp has to be converted.
Python
def test_my_info_success():
    initial_datetime = 1691377675
    with freeze_time(datetime.utcfromtimestamp(initial_datetime)):
        response = client.get("/my-info")
        assert response.status_code == 200
        result = {
            "user": {
                "id": 1,
                "firstname": "John",
                "lastname": "Smith",
                "phone": "+995000000000",
            },
            "timestamp": initial_datetime,
        }
        assert response.json() == result
        assert response.headers.get("X-Signature") == "638e4c9e30b157cc56fadc9296af813a"

Nothing has changed, ✅ so the test still passes, and we continue refactoring. Changes to the main.py code. The import:
Python
import time

In the response, we replace the hard-coded time with a method call.
Python
timestamp=int(time.time()),

✅ We launch the test — it works.
In tests, people often try to dynamically generate input data and write duplicate functions to calculate the expected results. I don't share this approach, as it can itself contain errors and requires testing as well. The simplest and most reliable way is to use input and output data prepared in advance. The only things worth reusing are configuration data, settings, and some proven fixtures. Now, we will add the settings.
The Seventh Subtask
Add a private key. We will take it from the environment variables.
Acceptance criteria: There is a private key (not an empty string). It is part of the signature generation process according to the algorithm described above. The application gets it from the environment variables. For the test, we use the private key 6hsjkJnsd)s-_=2$%723. As a result, our signature changes to 479bb02f0f5f1249760573846de2dbc1. We replace the signature verification in the test:
Python
assert response.headers.get("X-Signature") == "479bb02f0f5f1249760573846de2dbc1"

❌ Now, the test fails. We add the settings.py file to get the settings from environment variables.
Python
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    security_key: bytes


settings = Settings()

We add the code that uses this key to signature.py. The import:
Python
from .settings import settings

And we replace the concatenation line with:
Python
m.update(b64data + settings.security_key)

❌ The test still fails, because before running it we need to set the environment variable with the correct key. This can be done right before the call, for example, like this: export security_key='6hsjkJnsd)s-_=2$%723' (an alternative way to provide the key for tests is sketched below).
✅ Now, the test passes. I would not recommend setting a default value in the settings.py file: the variable must be defined. Setting a default, incorrect value can hide an error in production if the variable is not set during deployment; the application will start without errors but give incorrect results. However, in some cases, a working application with incorrect functionality is better than a 503 error. It's up to you as a developer to decide.
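As a side note, here is one possible way to provide the key to the test run without having to remember the export. This conftest.py is a hypothetical sketch, not part of the article's code; it relies on the fact that pytest imports conftest.py before the test modules, so the variable is in place before settings.py instantiates Settings().
Python
# conftest.py — a sketch: ensure the key exists before the app modules load.
import os

# Settings() reads the environment at import time, so this must run first;
# pytest imports conftest.py before collecting the test modules.
os.environ.setdefault("security_key", "6hsjkJnsd)s-_=2$%723")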
The next steps may be replacing the stub of the User object with real values from the database and writing additional validation tests and negative scenarios. In any case, you will have to add more acceptance tests at the end. The most important thing here is dividing the task into micro-tasks, writing a simple test for each subtask, then writing the application code, and after that, refactoring if necessary. This order of development really helps to:
- Focus on the problem
- See the result of the subtask clearly
- Quickly verify the written code
- Reduce negativity when writing tests
- Always have at least one test per task
As a result, there are fewer situations where a programmer "overdoes it" and spends much more time solving a problem than they would with a structured approach. Thus, the development time of a feature is reduced, and the quality of the code is improved. In the long term, changes, refactoring, and updates of package versions are easily controlled and implemented with minimal losses. And here is what's important: TDD should improve development, making it faster and more robust. This is what the word "Driven" in the abbreviation means. Therefore, it is not necessary to try to write a complete test or an acceptance test for the entire task before development starts. An iterative approach is needed: tests are only there to verify the next small step in development. TDD helps answer the question: how do I know that I have achieved my goal (I mean, that the code fragment I wrote works)? The examples can be found here.
Agile software development has transformed how software is created and delivered. It fosters collaboration, flexibility, and quick development cycles, making it appealing to many teams. However, Agile's numerous advantages come with specific cybersecurity risks that developers must address. In this post, we'll delve into the primary cybersecurity threats in Agile development and strategies to mitigate them.
Collaboration in Agile encourages diverse skill sets, which can lead to varying levels of security knowledge among team members; developers might prioritize functionality over security, potentially exposing vulnerabilities. To combat this, continuous learning and security training are essential. Agile's rapid development cycles can inadvertently introduce security flaws, given the emphasis on speed. To address this, integrate security testing into the Agile process and conduct regular security reviews. Neglecting threat modeling and documentation can leave security gaps: incorporate threat modeling workshops and maintain minimal yet critical security documentation. Third-party dependencies and insider threats are also concerns in Agile; regularly assess dependencies and implement least-privilege access controls to mitigate these risks. In today's rapidly evolving digital landscape, reducing cybersecurity risk is a paramount concern, especially within the Agile software development paradigm.
Insufficient Security Knowledge
Agile teams, comprised of developers, testers, and product owners, can suffer from insufficient security knowledge and a resulting lack of understanding of security principles. This diversity fosters creativity and speed but might lead to prioritizing functionality over security, potentially exposing code vulnerabilities. To mitigate this issue, encourage continuous learning within the Agile team: provide access to security training and resources, and consider involving security experts or consultants to conduct regular security assessments. This proactive approach enhances security awareness and ensures that security concerns are not overlooked during development. By taking these steps, Agile teams can strike a balance between innovation and security, fostering a more robust and secure software development process.
Rapid Development Cycles
Rapid development cycles in Agile prioritize speed and continuous deployment. This accelerates time-to-market, but it also heightens the risk of security flaws. It's therefore crucial to integrate security testing into the Agile process so that vulnerabilities are identified early. Automated security scanning during continuous integration is one effective method. Regular security reviews and penetration testing throughout the development lifecycle are also essential. These practices help ensure that security concerns aren't neglected in the rush to meet deadlines, and they allow developers to catch and address security issues before they become major problems. Speed is crucial, but it should not come at the expense of security.
Inadequate Threat Modeling
Inadequate threat modeling can lead to overlooked security issues. Agile development often skimps on this vital step, potentially jeopardizing the project: without proper threat modeling, teams may fail to identify critical security threats and vulnerabilities early in development. To address this concern, teams should integrate threat modeling workshops into the Agile sprint planning process.
During these workshops, developers can collectively brainstorm potential security risks. This keeps security at the forefront of their minds and encourages them to implement the necessary security controls. By following this practice, Agile teams can bridge the gap between speed and security, ensuring that their software remains resilient against potential threats. It is crucial to prioritize threat modeling, because overlooking it can lead to security breaches down the line.
Lack of Documentation
The Agile approach prioritizes working software, which can hinder security teams that rely on detailed documentation. Achieving a balance is crucial: maintain essential security documentation while embracing Agile's efficiency, because without documentation, understanding the system architecture and identifying vulnerabilities becomes challenging. Consider recording security-related decisions, threats, and mitigation strategies. This practice informs the entire team and bridges the gap between Agile's speed and security concerns.
Third-Party Dependencies
Third-party dependencies are common in Agile development, as they accelerate progress. However, these components might harbor vulnerabilities that attackers can exploit. So, what can you do to minimize these risks? Firstly, regularly evaluate third-party components for known vulnerabilities; you can utilize resources like the National Vulnerability Database (NVD) for this purpose. Secondly, make it a priority to keep these dependencies up to date, since outdated components are often more susceptible to attacks. Lastly, have a well-defined process in place for promptly patching or replacing any vulnerable components that are discovered, so that security concerns are addressed swiftly and effectively. By following these steps, you'll strengthen the security posture of your Agile development process and reduce the risks associated with third-party dependencies (a minimal automation sketch appears just before the conclusion below).
Neglecting Security Testing
Neglecting security testing during Agile development can lead to overlooked vulnerabilities. Automated tools, while helpful, may miss certain threats, exposing the application to potential risks. It's therefore crucial to integrate manual security testing into Agile processes. Encourage security reviews and code inspections, and conduct comprehensive testing for prevalent security issues such as injection attacks, authentication flaws, and authorization problems. By following these steps, you can ensure that your software is robustly protected against potential threats, even as you pursue faster development cycles. Security should never be sacrificed for speed; a balanced approach is essential to deliver secure and functional software.
Insider Threats
Insider threats are a concern in Agile teams, as members have access to code, infrastructure, and sensitive data. These threats, whether intentional or unintentional, pose significant security risks. To mitigate this risk, implement least-privilege access controls: restrict access to sensitive resources, granting permissions only as necessary. Foster a culture of security awareness among team members, and encourage them to stay vigilant and report any suspicious activity promptly. By taking these measures, Agile teams can minimize the chances of insider threats compromising the security of their software. It's essential to strike a balance between collaboration and security to protect valuable assets effectively.
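To make the dependency-scanning and automated-testing advice concrete, here is a minimal sketch of a script that a CI pipeline could run on every commit. It assumes the open-source tools bandit (static analysis for Python code) and pip-audit (which checks installed dependencies against known-vulnerability databases) are installed; the script itself and the "app" path are hypothetical.
Python
# ci_security_scan.py — a sketch of wiring security scanners into CI.
import subprocess
import sys


def run(cmd: list[str]) -> int:
    # A non-zero exit code signals findings or a tool failure.
    print("Running:", " ".join(cmd))
    return subprocess.call(cmd)


if __name__ == "__main__":
    failed = 0
    failed += run(["bandit", "-r", "app"])  # static analysis of our own code
    failed += run(["pip-audit"])            # known vulnerabilities in dependencies
    sys.exit(1 if failed else 0)

Failing the build on findings keeps security feedback inside the same fast loop as the rest of the automated tests.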
Conclusion
Agile software development brings benefits along with specific cybersecurity risks, and developers should address those risks proactively. By integrating security practices into the Agile workflow and emphasizing continuous learning, teams can build software that's both functional and secure. Maintaining vigilance throughout the development process is crucial: a cybersecurity program plays a vital role in safeguarding digital assets and information, ensuring a robust defense against evolving online threats. In the fast-paced world of Agile, cybersecurity must never be an afterthought; it should be an integral part of the development process. Strike a balance between agility and security by relying on threat modeling, security testing, and regular assessments of third-party dependencies. By following these practices, developers can navigate the Agile landscape while safeguarding their software against potential threats. In summary, in Agile development, security is not a "nice-to-have" but a "must-have" to ensure the success and safety of your software.
Product management is a multifaceted discipline that plays a critical role in the success of a product or service. It encompasses the strategic planning, development, and execution of products, focusing on understanding customer needs, market trends, and business objectives. In this ever-evolving field, staying informed about the latest tips, tricks, and best practices is essential for product managers to excel in their roles. However, it's equally important to be aware of the potential pitfalls and anti-patterns that can derail product initiatives. This guide aims to provide a comprehensive resource for product managers, offering practical insights and strategies for success while highlighting common mistakes to avoid. Whether you're a seasoned product manager or just starting your journey, this guide will help you navigate the complex landscape of product management with confidence and competence.
Transforming Product Strategy Into Backlog Items: Components of a Backlog
A crucial aspect of product management is effectively translating product strategy into actionable backlog items. The backlog serves as a dynamic repository of the tasks that need to be completed to achieve the desired product goals, and maintaining it well is part of Jira best practices. Here are the key components involved in transforming product strategy into backlog items:
- Define Epics/Themes: Begin by identifying and documenting the epics/themes based on the product vision and strategy. These epics/themes should address specific problems and customer needs.
- Prioritize and Set a Timeline: Establish priorities for the identified epics/themes and provide an indicative timeline. This high-level overview will help stakeholders understand the product's direction.
- Decompose Epics into Tasks: Break the higher-priority epics down into smaller, actionable tasks and user stories. This step prepares the groundwork for the development team to accomplish the objectives.
- Provide Context: Offer sufficient context for each task and user story, including relevant information such as user personas, market research, and competitive analysis. This context aids the development team in understanding the purpose and scope of each item.
- Make Estimates: Assign estimates to the tasks and user stories to gauge the effort required for implementation, utilizing techniques such as story points, time-based estimates, or t-shirt sizing. These estimates are valuable for capacity planning and resource allocation.
By following this practical approach, product managers can effectively transform their product strategy into well-defined and prioritized backlog items. Additionally, the following practices can further enhance the process:
- Business Goals Review: Regularly review the business goals and align the backlog items accordingly. This ensures that the product strategy remains aligned with the overall business objectives.
- Breaking Down Initiatives: Break large initiatives down into smaller, actionable tasks. This allows for incremental development, facilitates better estimation, and enables frequent feedback loops.
- Value-Based Prioritization: Prioritize backlog items based on the value they deliver to the users and the business. Consider the impact on customer satisfaction, revenue generation, competitive advantage, or strategic alignment when making prioritization decisions.
By incorporating these components and practices, product managers can effectively transform their product strategy into well-defined and prioritized backlog items, setting the stage for successful product development and delivery.
Remembering the Key Tips
Here are practical tips for transforming product strategy into backlog items:
- Start with Clear Goals: Begin by defining clear and specific product goals aligned with the overall business strategy. Having well-defined objectives will guide the creation of meaningful backlog items.
- Involve Stakeholders Early: Involve key stakeholders, including customers, users, and internal teams, in the product strategy discussions. Understanding their perspectives and needs will help shape backlog items that better address real-world challenges.
- Break Down Epics: If your product strategy includes high-level epics, break them down into smaller, actionable backlog items. This makes it easier to prioritize and execute tasks in an iterative manner.
- Use User-Centric Language: Frame backlog items as user stories that focus on what the user needs and the value they will gain. This approach enhances empathy and ensures that development efforts remain customer-centric.
- Prioritize Ruthlessly: Establish a robust prioritization framework to evaluate backlog items objectively. Consider factors like customer impact, ROI, technical feasibility, and market trends to rank items effectively.
- Collaborate Across Teams: Foster collaboration between product managers, designers, developers, and QA teams during backlog refinement. A shared understanding ensures that everyone is on the same page and can contribute valuable insights.
- Keep Backlog Items Small: Aim for smaller, manageable backlog items that can be completed within a single development cycle (e.g., a sprint in Agile). This approach promotes steady progress and allows for course corrections if needed.
- Validate Assumptions: Regularly validate assumptions made during product strategy formulation. Conduct user research, usability testing, and market analysis to ensure that the backlog items address real user needs.
- Include Technical and Maintenance Tasks: Balance new feature development with technical debt resolution and maintenance tasks. Allocating time for these items ensures a more sustainable and reliable product.
- Focus on the Minimum Viable Product (MVP): Prioritize essential features and functionalities to create an MVP that can be delivered quickly. This allows for early user feedback and faster validation of the product's value.
- Adapt and Learn: Be open to adapting the backlog based on feedback and insights gained during development. Embrace an iterative approach to continuously improve and refine the product.
- Communicate the Why: Provide context for each backlog item, explaining why it aligns with the product strategy and how it contributes to the overall vision. This understanding helps motivate the team and fosters a sense of purpose.
By following these practical tips, product managers can effectively translate product strategy into a well-organized and actionable backlog. This dynamic repository of tasks will enable the team to stay focused, prioritize effectively, and deliver a successful product that meets customer needs and achieves business goals.
Prioritization Frameworks
When managing a backlog, it's crucial to have a robust prioritization framework in place.
Here are a few commonly used frameworks:
- MoSCoW: This framework categorizes backlog items as Must-haves, Should-haves, Could-haves, and Won't-haves. It helps prioritize features based on criticality and urgency.
- Kano Model: This model classifies backlog items into three categories: Basic Expectations, Performance Enhancers, and Exciters. It helps differentiate between essential features and those that provide additional value or delight to users.
- Value vs. Effort Matrix: This framework evaluates backlog items based on their value to the users or business against the effort required for implementation. It helps identify high-value, low-effort items for prioritization.
Benefits of Prioritization Frameworks
Several prioritization frameworks can assist product managers in making informed decisions. Here's an example:
RICE Framework: RICE stands for Reach, Impact, Confidence, and Effort. It helps prioritize backlog items by assessing their potential value and feasibility.
- Reach: The number of users or customers who will be impacted by the item.
- Impact: The degree of positive impact on the users or business.
- Confidence: The level of confidence in estimating the impact and effort accurately.
- Effort: The level of effort or resources required for implementation.
By considering these factors, product managers can prioritize items with a higher potential impact and reach while also weighing the effort and confidence levels (a small scoring sketch follows). The choice of a prioritization framework depends on the project's specific needs, team dynamics, and stakeholder requirements. It is important to select a framework that aligns with the project goals and allows for flexibility and adaptability as the project progresses.
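As an illustration, here is a minimal sketch of RICE scoring using the commonly cited formula score = (Reach × Impact × Confidence) / Effort. The backlog items and their numbers are invented for the example; the units (users per quarter, person-months) are one common convention, not a requirement of the framework.
Python
# A minimal RICE scoring sketch; the backlog items below are hypothetical.
from dataclasses import dataclass


@dataclass
class BacklogItem:
    name: str
    reach: float       # users affected per quarter
    impact: float      # e.g., 0.25 (minimal) up to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # person-months

    @property
    def rice_score(self) -> float:
        return (self.reach * self.impact * self.confidence) / self.effort


items = [
    BacklogItem("Self-serve onboarding", reach=5000, impact=2, confidence=0.8, effort=4),
    BacklogItem("Dark mode", reach=2000, impact=0.5, confidence=0.9, effort=1),
]

for item in sorted(items, key=lambda i: i.rice_score, reverse=True):
    print(f"{item.name}: {item.rice_score:.0f}")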
Roadmap and Requirements as a Foundation for Backlog
A well-defined roadmap and clear requirements are essential for effective backlog management. The roadmap outlines the strategic vision, goals, and timeline for the product and serves as a guide for prioritizing and sequencing backlog items. The requirements, in turn, provide detailed specifications and user stories that define the desired functionality and outcomes.
Effectively Managing a Backlog
To manage a backlog efficiently, consider the following practices:
- Regular Refinement: Schedule regular backlog refinement sessions to review and update the backlog items. This involves clarifying requirements, re-prioritizing, decomposing large items, and removing or archiving obsolete items.
- Continuous Prioritization: Prioritize backlog items based on changing market dynamics, customer feedback, and business needs. Regularly reassess priorities to ensure the backlog remains aligned with the evolving product strategy.
- Collaborative Approach: Involve the development team, stakeholders, and users in backlog management. Encourage collaboration and transparency to gather diverse perspectives and feedback and to ensure a shared understanding of priorities.
Backlog Management Anti-Patterns
While managing a backlog, it's important to be aware of common anti-patterns that can hinder productivity and effectiveness. Anti-patterns to avoid include:
- Overcommitting: Overloading the backlog with more items than the team can handle, leading to inefficiency and missed deadlines.
- Lack of Refinement: Neglecting regular backlog refinement, resulting in unclear requirements, ambiguity, and delays in development.
- Ignoring Stakeholder Feedback: Disregarding input from stakeholders and users, which can lead to a misaligned backlog that does not address their needs or expectations.
- Not Adapting to Change: Resisting change and failing to adjust backlog priorities based on new information, market shifts, or emerging opportunities.
By implementing effective prioritization frameworks, establishing a solid foundation with a roadmap and requirements, adopting good backlog management practices, and avoiding common anti-patterns, product managers can ensure that their backlogs remain organized, aligned, and optimized for successful product development and delivery.
Jira Backlog Refinement
Jira is a popular tool for backlog management in many software development teams. Here are some tips to consider before, during, and after a Jira backlog refinement session.
Before the Refinement Session
- Review Backlog Items: Familiarize yourself with the existing backlog items in Jira, their descriptions, priorities, and any associated attachments or comments. This will give you a clear understanding of the current state of the backlog.
- Gather Stakeholder Input: Reach out to stakeholders, including product owners, development team members, and users, to gather their insights, feedback, and new requirements before the session. This ensures that their perspectives are considered during the refinement.
During the Refinement Session
- Set Clear Objectives: Clearly communicate the goals and objectives of the refinement session at the beginning. Ensure that all participants have a shared understanding of what needs to be accomplished during the session.
- Focus on One Item at a Time: Address backlog items one by one to maintain focus and avoid getting overwhelmed. Discuss each item's requirements, acceptance criteria, and any necessary adjustments or updates.
- Break Down Large Items: If any backlog items are too large or complex, consider breaking them down into smaller, more manageable tasks. This promotes clarity and allows for more accurate estimation and planning. Pro tip: you can use Smart Checklist for refining a task; checklist items are a perfect fit when you need to break tasks down and make them clearer and more manageable.
After the Refinement Session
- Update the Jira Backlog: Make sure to accurately capture and document any changes, updates, or newly created backlog items in Jira. Update item descriptions, acceptance criteria, and priorities as discussed during the session.
- Communicate Changes: Share the outcomes of the refinement session with the relevant team members and stakeholders. Communicate any modifications to backlog items, priorities, or timelines to ensure everyone is on the same page.
- Schedule Regular Refinement: Plan and schedule regular backlog refinement sessions to keep the backlog up to date and ensure ongoing alignment with the product goals and requirements.
By following these tips before, during, and after a Jira backlog refinement session, you can enhance collaboration, improve backlog clarity, and maintain an organized and effective backlog management process.
Sprint vs. Product Backlog in Jira
In Jira, the sprint backlog and the product backlog serve different purposes:
- Sprint Backlog: The sprint backlog consists of the user stories or tasks that the development team commits to completing within a specific sprint or time-boxed iteration. It is a subset of the product backlog and represents the work planned for that particular sprint.
- Product Backlog: The product backlog contains a prioritized list of all the desired features, enhancements, and bug fixes for the product.
It represents the complete scope of work to be done over multiple sprints. The sprint backlog is a temporary, evolving list specific to a particular sprint, while the product backlog represents the overall product roadmap.
Factors Influencing a Product Manager's Prioritization
Several factors may influence a product manager's prioritization decisions, including:
- Customer Needs: Understanding and addressing the needs and pain points of the target customers or user base is a significant factor in prioritizing backlog items.
- Business Goals: Aligning the backlog priorities with the strategic objectives and business goals of the organization is crucial for ensuring the product's success.
- Market Trends: Keeping an eye on market trends, competition, and emerging technologies can help guide prioritization decisions and ensure the product remains relevant.
- User Feedback: Listening to user feedback, conducting user research, and considering user satisfaction and engagement data can influence prioritization.
- Technical Constraints: Technical dependencies, resource availability, and development capacity can all affect the prioritization of backlog items.
Common Pitfalls of Backlog Management
Some common pitfalls in backlog management include:
- Treating It as a Wishlist: Including numerous low-priority or vague items without proper evaluation can lead to an overloaded backlog with little strategic focus.
- Misaligned Objectives: Prioritizing backlog items that do not align with the overall business goals or product strategy can result in wasted effort and resources.
- Lack of Regular Refinement: Neglecting regular refinement sessions can lead to unclear requirements, stale backlog items, and delays in development.
- Ignoring Stakeholder Input: Disregarding feedback from stakeholders and users can result in a misaligned product and missed opportunities.
- Inefficient Prioritization: Failing to employ effective prioritization frameworks or techniques can lead to suboptimal allocation of resources and missed opportunities for high-value features.
By avoiding these pitfalls, product managers can maintain a focused and effective backlog that aligns with business goals, satisfies user needs, and maximizes the product's value.
The Takeaway
Jira is a popular choice among project managers because it provides the right balance of customization, flexibility, and quality-of-life features for teams. Jira priorities matter in many aspects of project management, including planning, resource allocation, and decision-making. And if something is missing from the app, it is easy to find on the Atlassian Marketplace!
Agile methodologies have genuinely transformed the landscape of service delivery and tech companies, ushering in a fresh era of adaptability and flexibility that matches the demands of today's fast-paced business world. Their significance runs deep: they not only streamline processes but also foster a culture of ongoing improvement and collaboration.
Within the service delivery context, Agile methodologies introduce a dynamic approach that empowers teams to address evolving client needs swiftly and effectively. Unlike conventional linear models, Agile encourages iterative development and constant feedback loops. This iterative nature ensures that services are refined in real time, allowing companies to quickly adjust their strategies based on market trends and customer preferences. In the tech sector, characterized by innovation and rapid technological advancement, Agile methodologies play a pivotal role in keeping companies on the cutting edge. By promoting incremental updates, short development cycles, and a customer-focused mindset, Agile enables tech companies to swiftly incorporate new technologies or features into their products and services, positioning them as frontrunners in a highly competitive industry. Ultimately, Agile methodologies offer a structured yet flexible approach to project management and service delivery, enabling companies to deal with complexity more effectively and adapt quickly to market changes.
Understanding Agile Principles and Implementation
The list of Agile methodologies encompasses Scrum, Kanban, Extreme Programming (XP), Feature-Driven Development (FDD), Dynamic Systems Development Method (DSDM), Crystal, Adaptive Software Development (ASD), and Lean Development. Irrespective of the specific methodology chosen, each one contributes to enhancing efficiency and effectiveness across the software development journey. Agile methodologies are underpinned by core principles that set them apart from traditional project management approaches, notably:
- Close client interaction throughout development, ensuring alignment and avoiding miscommunication.
- Responsive adaptation to change, given the ever-evolving nature of markets, requirements, and user feedback.
- Effective, timely team communication, which is pivotal for success.
- Embracing changes that deviate from the plan as opportunities for product improvement and enhanced interaction.
Agile's key distinction from systematic work lies in its ability to combine speed, flexibility, quality, adaptability, and continuous improvement of results. Importantly, the implementation of Agile methodologies can vary across organizations: each entity can tailor its approach based on its specific requirements, culture, and project nature, and the approach is fluid and can evolve as market dynamics change during the work. The primary challenge of adopting Agile is initiating the process from scratch and conveying to stakeholders the benefits of an alternative approach. The most significant reward, however, is a progressively improving outcome, including enhanced team communication, client trust, reduced risk impact, increased transparency, and openness.
Fostering Collaboration and Communication
Effective communication serves as the backbone of any successful project. It's imperative to maintain constant synchronization and to know whom to approach when challenges arise that aren't easily resolved.
Numerous tools facilitate this process, including daily meetings, planning sessions, and task grooming (encompassing all stakeholders involved in the tasks). Retrospectives also play a pivotal role, providing a platform to discuss the positive aspects of the sprint, address the challenges that arose, and collaboratively find solutions. Every company can select the artifacts that align with its needs. Maintaining communication with the client is critical, as the team must be aware of plans and the overall business trajectory. Agile practices foster transparency and real-time feedback, resulting in adaptive and client-centric service delivery:
- Iterative development keeps the client informed about each sprint's outcomes.
- Demos showcasing completed work give the client a gauge of project progress and alignment with expectations.
- Close interaction and feedback loops with the client are central during development.
- Agile artifacts — such as daily planning, retrospectives, and grooming, to name a few — facilitate efficient coordination.
- Continuous integration and testing ensure product stability amid regular code changes.
Adapting to Change and Continuous Improvement
Change is an undeniable reality in today's ever-evolving business landscape. Agile methodology equips your team with the agility needed to accommodate evolving requirements and shifting client needs in service delivery. Our operational approach at Innovecs involves working in succinct iterations, or sprints, consistently delivering incremental value within short timeframes. This empowers teams to respond promptly to changing prerequisites and adjust priorities based on invaluable customer input. Agile not only facilitates the rapid assimilation of new customer requirements and preferences but also nurtures an adaptive and collaborative service delivery approach. The foundation of continuous feedback, iterative development, and a culture centered on learning and enhancement keeps Agile teams agile, delivering impactful solutions tailored to the demands of today's dynamic business landscape. A cornerstone of Agile methodologies is perpetual advancement. As an organization, we cultivate an environment steeped in learning and iteration, where experimentation with novel techniques and tools becomes an engaging challenge for the team. The satisfaction and enthusiasm arising from successful results further fuel our pursuit of continuous improvement.
Measuring Success and Delivering Value
Agile methodology places a central focus on delivering substantial value to customers. Consequently, gauging the success of service delivery efforts in terms of customer contentment and business outcomes is of the utmost significance. This assessment can take several avenues:
- Feedback loops and responsiveness: Employing surveys and feedback mechanisms fosters transparency and prompt responses. Above all, the ultimate success of the product amplifies customer satisfaction.
- Metrics analysis: Evaluating customer satisfaction and business metrics empowers organizations to make informed choices, recalibrate strategies, and perpetually enhance their services to retain their competitive edge in the market.
We encountered a specific scenario where Agile methodologies yielded remarkable service delivery enhancements and tangible benefits for our clients. In this instance, my suggestion to introduce two artifacts, task refinement and demos, yielded transformative outcomes.
This refinement bolstered planning efficiency and culminated in on-time sprint deliveries. Notably, clients were consistently kept abreast of project progress. In a market characterized by rapid, unceasing change, preparedness for any scenario is key. Flexibility and unwavering communication are vital to navigating uncertainty; being adaptable and maintaining open lines of dialogue are bedrock principles for achieving exceptional outcomes. When it comes to clients, transparency is paramount. Delivering work that exceeds expectations is a recurring theme: always aiming to go a step further than anticipated reinforces our commitment to client satisfaction.