Development team management involves a combination of technical leadership, project management, and the ability to grow and nurture a team. These skills have never been more important, especially with the rise of remote work both across industries and around the world. The ability to delegate decision-making is key to team engagement. Review our inventory of tutorials, interviews, and first-hand accounts of improving the team dynamic.
The need for speed, agility, and security is paramount in the rapidly evolving landscape of software development and IT operations. DevOps, with its focus on collaboration and automation, has revolutionized the industry. However, in an era where digital threats are becoming increasingly sophisticated, security can no longer be an afterthought. This is where DevSecOps comes into play: a philosophy and set of practices that seamlessly blends security into the DevOps workflow. This guide delves into the principles, benefits, challenges, real-world use cases, and best practices of DevSecOps.

Understanding DevSecOps

What Is DevSecOps?

DevSecOps is an extension of DevOps, where "Sec" stands for security. It's a holistic approach that integrates security practices into software development and deployment. Unlike traditional methods, where security was a standalone phase, DevSecOps ensures that security is embedded throughout the entire Software Development Life Cycle (SDLC). The primary goal is to make security an enabler, not a bottleneck, in the development and deployment pipeline.

The Importance of a DevSecOps Culture

DevSecOps is not just about tools and practices; it's also about fostering a culture of security awareness and collaboration. Building a DevSecOps culture within your organization is crucial for long-term success. Here's why it matters:

- Security Ownership: In a DevSecOps culture, everyone is responsible for security. Developers, operators, and security professionals share ownership, which leads to proactive security measures.
- Rapid Detection and Response: A culture that values security ensures potential issues are spotted early, allowing for swift responses to security threats.
- Continuous Learning: Embracing a DevSecOps culture encourages continuous learning and skill development. Team members are motivated to stay updated on security practices and threats.
- Collaboration and Communication: When teams work together closely and communicate effectively, security vulnerabilities are less likely to slip through the cracks.

Future Trends in DevSecOps

As technology evolves, so does the DevSecOps landscape. Here are some emerging trends to watch:

- Shift-Right Security: While "shift left" focuses on catching vulnerabilities early, "shift right" emphasizes security in production. This trend involves real-time monitoring and securing applications and infrastructure at runtime.
- Infrastructure as Code (IaC) Security: As organizations embrace IaC for provisioning and managing infrastructure, securing IaC templates and configurations becomes vital. Expect increased emphasis on IaC security practices.
- Cloud-Native Security: With the growing adoption of cloud-native technologies such as containers and serverless computing, security in the cloud is paramount. Cloud-native security tools and practices will continue to evolve.
- AI and Machine Learning in Security: AI and machine learning are being applied to security operations for threat detection, anomaly identification, and automated incident response. These technologies will play an increasingly prominent role in DevSecOps.
- Compliance as Code: Automating compliance checks and incorporating compliance as code into the DevSecOps pipeline will help organizations meet regulatory requirements more efficiently.

The Role of Security in DevOps

Historically, security was often treated as a separate silo, addressed late in the development process or even post-deployment. That approach no longer holds up in today's threat landscape, where vulnerabilities and breaches can be catastrophic. Security must keep pace in the DevOps environment, characterized by rapid change and continuous delivery. Neglecting it can lead to significant risks, including data breaches, compliance violations, reputational damage, and financial losses.
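The "shift right" trend mentioned above comes down to watching runtime signals in production. As a minimal sketch (the threshold, window, and sample values are invented for illustration, not taken from any monitoring product), an alert that fires only on a sustained breach might look like this:

```python
# Illustrative sketch of runtime ("shift right") monitoring: fire an alert only
# when a metric stays above its threshold for several consecutive samples,
# so one-off spikes don't page anyone. All numbers here are invented.
def should_alert(samples, threshold, sustained=3):
    """Return True if `samples` exceeds `threshold` for `sustained` readings in a row."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= sustained:
            return True
    return False

cpu_percent = [40, 95, 50, 92, 93, 96, 60]
print(should_alert(cpu_percent, 90))  # True: 92, 93, 96 breach in a row
```

Requiring several consecutive breaches is a common way to keep runtime alerts from turning into noise, which is exactly the failure mode that erodes a team's trust in its monitoring.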
Benefits of DevSecOps

Integrating security into DevOps practices offers numerous advantages:

- Reduced Vulnerabilities: By identifying and addressing security issues early in development, vulnerabilities are less likely to make it to production.
- Enhanced Compliance: DevSecOps facilitates compliance with regulatory requirements by integrating security checks into the SDLC.
- Improved Customer Trust: Robust security measures instill confidence in customers and users, strengthening trust in your products and services.
- Faster Incident Response: DevSecOps equips organizations to detect and respond to security incidents more swiftly, minimizing potential damage.
- Cost Savings: Identifying and mitigating security issues early is often more cost-effective than addressing them post-deployment.

Key Principles of DevSecOps

DevSecOps is guided by core principles that underpin its philosophy and approach:

- Shift Left: "Shift left" means moving security practices and testing as early as possible in the SDLC, so that security is a fundamental consideration from the project's inception.
- Automation: Automation is a cornerstone of DevSecOps. Security checks, tests, and scans should be automated to detect issues consistently and rapidly.
- Continuous Monitoring: Continuous monitoring of applications and infrastructure in production helps identify and respond to emerging threats and vulnerabilities.
- Collaboration: DevSecOps promotes collaboration between development, operations, and security teams. Everyone shares responsibility for security, fostering a collective sense of ownership.

Implementing Security Early in the SDLC

To effectively integrate security into your DevOps workflow, you must ensure that security practices are ingrained in every stage of the Software Development Life Cycle. Here's how you can achieve this:

- Planning and Design: Begin with security considerations during the initial planning and design phase. Identify potential threats and define security requirements.
- Code Development: Developers should follow secure coding practices, including input validation, authentication, and authorization controls. Static code analysis tools can help identify vulnerabilities at this stage.
- Continuous Integration (CI): Implement automated security testing as part of your CI pipeline, including dynamic code analysis and vulnerability scanning.
- Continuous Deployment (CD): Security should be an integral part of the CD pipeline, with automated security testing to validate the security of the deployment package.
- Monitoring and Incident Response: Continuous monitoring of production systems allows security incidents to be detected and responded to rapidly.

Security Tools and Technologies

Effective DevSecOps implementation relies on a range of security tools and technologies that automate security testing, vulnerability scanning, and threat detection. Here are some key categories:

- Static Application Security Testing (SAST): SAST tools analyze source code, bytecode, or binary code to identify vulnerabilities without executing the application.
- Dynamic Application Security Testing (DAST): DAST tools assess running applications by sending requests and analyzing responses to identify vulnerabilities.
- Interactive Application Security Testing (IAST): IAST tools combine elements of SAST and DAST by analyzing code as it executes in a live environment.
- Container Security: Containerization introduces its own security challenges. Container security tools scan container images for vulnerabilities and enforce runtime security policies.
- Vulnerability Scanning: Vulnerability scanning tools assess your infrastructure and applications for known vulnerabilities, helping you prioritize remediation efforts.
- Security Information and Event Management (SIEM): SIEM tools collect and analyze security-related data to identify and respond to security incidents.
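To make the SAST category concrete, here is a toy sketch of a static check. Real SAST tools parse code into syntax trees; this pattern matcher, with its invented RISKY_PATTERNS table, only illustrates the core idea of flagging vulnerabilities without executing the program:

```python
import re

# Toy static-analysis pass: flag a few well-known risky Python patterns.
# This is a minimal sketch of the idea behind SAST tools, not a real scanner;
# the pattern table is invented for illustration.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bpickle\.loads\(": "unpickling untrusted data",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan_source(source):
    """Return a list of findings ("line N: message") for the given source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {message}")
    return findings

code = 'user = "bob"\npassword = "hunter2"\nresult = eval(expr)\n'
print(scan_source(code))  # flags the hard-coded credential and the eval() call
```

Because a check like this runs on source text alone, it can sit early in the pipeline, which is exactly what "shift left" asks for.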
When integrated into your DevSecOps pipeline, these tools provide comprehensive security testing and monitoring coverage.

Collaboration Between DevOps and Security Teams

Effective collaboration between development, operations, and security teams is essential for DevSecOps success. Here are some strategies to foster this collaboration:

- Establish Clear Communication Channels: Ensure that teams have clear channels for communication, whether through regular meetings, shared chat platforms, or documentation.
- Cross-Training: Encourage team members to cross-train in each other's areas of expertise. Developers should understand security principles, and security experts should grasp development and operational concerns.
- Shared Responsibility: Emphasize shared responsibility for security. Encourage a culture where everyone considers security part of their role.
- Joint Ownership: Consider forming cross-functional teams with members from different departments to jointly own and operate security-related projects.

Real-World Use Cases

To illustrate the impact of DevSecOps in practice, let's examine a couple of real-world examples.

Case Study: Company X

Company X, a financial services provider, implemented DevSecOps to enhance the security of its online banking application. By integrating security checks into its CI/CD pipeline and implementing continuous monitoring, it achieved:

- A 60% reduction in security vulnerabilities.
- Improved compliance with industry regulations.
- A 40% decrease in the mean time to detect and respond to security incidents.

Case Study: Healthcare Provider Y

Healthcare Provider Y adopted DevSecOps to protect patient data in its electronic health record system. By automating vulnerability scanning and improving collaboration between its development and security teams, it achieved:

- Zero security breaches in the past year.
- Streamlined compliance with healthcare data security regulations.
- Improved trust and confidence among patients.
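Both case studies report time-based security metrics. As a small illustration of how such a number is produced (the incident records and field names below are invented for the example), mean time to detect or remediate is just an average over incident durations:

```python
from datetime import datetime

# Minimal sketch: computing mean time to remediate (MTTR) from incident
# records. The records and field names are illustrative, not taken from
# any particular incident-tracking tool.
incidents = [
    {"detected": datetime(2023, 5, 1, 9, 0), "resolved": datetime(2023, 5, 1, 13, 0)},
    {"detected": datetime(2023, 5, 3, 10, 0), "resolved": datetime(2023, 5, 3, 12, 0)},
]

def mean_time_to_remediate(records):
    """Average remediation time in hours across incident records."""
    durations = [(r["resolved"] - r["detected"]).total_seconds() / 3600 for r in records]
    return sum(durations) / len(durations)

print(mean_time_to_remediate(incidents))  # 3.0 hours: (4h + 2h) / 2
```

Tracking this average over time is what makes claims such as "a 40% decrease in mean time to detect and respond" measurable rather than anecdotal.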
These case studies highlight the tangible benefits that organizations can realize by embracing DevSecOps.

Challenges and Solutions

While DevSecOps offers numerous benefits, it has its challenges. Here are some common challenges and strategies to address them:

- Resistance to Change. Solution: Foster a culture of continuous improvement and provide training and resources to help team members adapt to new security practices.
- Tool Integration. Solution: Choose tools that integrate seamlessly with your existing DevOps pipeline and automate the integration process.
- Complexity. Solution: Start small and gradually expand your DevSecOps practices, focusing first on the most critical security concerns.
- Compliance Hurdles. Solution: Work closely with compliance experts to ensure your DevSecOps practices align with regulatory requirements.

Measuring DevSecOps Success

Measuring the success of your DevSecOps practices is essential to ongoing improvement. Here are some key performance indicators (KPIs) and metrics to consider:

- Number of Vulnerabilities Detected: Measure how many vulnerabilities are detected and remediated.
- Mean Time to Remediate (MTTR): Track how quickly your team can address and resolve security vulnerabilities.
- Frequency of Security Scans: Monitor how often security scans and tests are performed as part of your pipeline.
- Incident Response Time: Measure the time it takes to respond to and mitigate security incidents.
- Compliance Adherence: Ensure your DevSecOps practices align with industry regulations and standards.

Getting Started With DevSecOps

If you're new to DevSecOps, here are some steps to get started:

- Assess Your Current State: Evaluate your existing DevOps practices and identify areas where security can be integrated.
- Define Security Requirements: Determine your organization's security requirements and regulatory obligations.
- Choose Appropriate Tools: Select security tools that align with your goals and integrate seamlessly with your existing pipeline.
- Educate Your Team: Provide training and resources to help your team members acquire the necessary skills and knowledge.
- Start Small: Initiate a pilot project to test your DevSecOps practices before scaling up.
- Continuously Improve: Embrace a culture of continuous improvement, conducting regular reviews and optimizations of your DevSecOps practices.

Conclusion

In today's digital landscape, security is not an option; it's a necessity. DevSecOps is the answer to the growing challenges of securing software in a fast-paced development and deployment environment. By integrating security into every phase of the DevOps pipeline, organizations can reduce vulnerabilities, enhance compliance, build customer trust, and respond more effectively to security incidents.

Whether you're just starting your DevSecOps journey or looking to refine existing practices, the principles and strategies outlined in this guide will set you on the path to a more secure and resilient software development process. As threats continue to evolve, embracing DevSecOps is not just a best practice; it's a critical imperative for the future of software development and IT operations.
In today’s digital landscape, cloud computing platforms have become essential for businesses seeking scalable, reliable, and secure solutions. Microsoft Azure, a leading cloud provider, offers a wide range of services and resources to meet the diverse needs of organizations. In this blog post, we will delve into Azure project management, highlighting the significant tasks carried out to ensure efficient operations and successful deployment during your software product development journey.

Azure Project Management: Infrastructure and Services

Resource Setup

To kickstart the project, several key resources were provisioned on Microsoft Azure. App Services were established for both frontend and backend components, enabling the seamless delivery of web applications. MySQL databases were implemented to support data storage and retrieval for both the front end and back end. Additionally, Service Buses and Blob Storage were configured to facilitate efficient messaging and file storage, respectively.

Bitbucket Pipelines for Automated Deployment

To streamline the deployment process, Bitbucket Pipelines were implemented. These pipelines automate the deployment workflow, ensuring consistent and error-free releases. With automated deployments, developers can focus more on building and testing their code while the deployment process itself is handled seamlessly by the pipelines.

Autoscaling for App Services

To optimize resource allocation and ensure optimal performance, autoscaling was configured for all the App Services. This dynamic scaling capability automatically adjusts the number of instances based on predefined metrics such as CPU utilization or request count. By scaling resources up or down as needed, the project can handle varying workloads efficiently, maintaining responsiveness and cost-effectiveness.
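The autoscaling behavior described above boils down to a simple decision rule: add an instance when load is high, remove one when it is low, and stay within configured bounds. This sketch is illustrative only; the thresholds and instance limits are assumptions for the example, not Azure defaults:

```python
# Hedged sketch of a metric-driven autoscaling decision: scale out when average
# CPU exceeds a high threshold, scale in below a low one, within min/max bounds.
# Thresholds and instance limits are invented for illustration.
def desired_instances(current, avg_cpu, high=70.0, low=30.0, min_n=1, max_n=10):
    """Return the instance count the next scaling step should target."""
    if avg_cpu > high and current < max_n:
        return current + 1  # scale out under load
    if avg_cpu < low and current > min_n:
        return current - 1  # scale in when idle
    return current          # hold steady in the comfortable band

print(desired_instances(2, 85.0))  # 3: scale out under load
print(desired_instances(2, 20.0))  # 1: scale in when idle
print(desired_instances(2, 50.0))  # 2: hold steady
```

Keeping a "dead band" between the two thresholds is a deliberate choice: without it, a workload hovering near a single threshold would cause the service to flap between instance counts.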
Azure Kubernetes Service for the AI API

To leverage the power of containerization and orchestration, the AI API component of the project was moved to an Azure Kubernetes Service (AKS) cluster. Kubernetes provides a scalable and resilient environment for running containerized applications, allowing for easy management and deployment of the AI API. This migration enhances flexibility, scalability, and fault tolerance, enabling seamless integration with other project components.

Migration to Azure Service Bus

In a bid to enhance messaging capabilities, the existing RabbitMQ infrastructure was migrated to Azure Service Bus. Azure Service Bus provides a reliable and scalable messaging platform, ensuring seamless communication between different components of the project. The migration offers improved performance, higher scalability, and better integration with other Azure services.

Deprecation Updates and Function Creation

As technology evolves, it is crucial to keep the project’s infrastructure up to date. Deprecated services such as storage accounts and MySQL were updated to their latest versions, ensuring compatibility and security. Additionally, functions were created for webhooks and scheduled scripts, enabling efficient automation of routine tasks and enhancing the project’s overall efficiency.

Monitoring in Azure Project Management

Alert Configuration

Proactive monitoring is crucial to identify and address any issues promptly. Alerts were set up on all the project’s resources, including App Services, MySQL databases, Service Buses, and Blob Storage. These alerts help the team stay informed about potential performance bottlenecks, security breaches, or other critical events, allowing them to take immediate action and minimize downtime.

Monitoring With Elasticsearch, Logstash, and Kibana (ELK)

To gain valuable insights into the project’s operational and log data, a monitoring system was set up using the Elasticsearch, Logstash, and Kibana (ELK) stack.
The ELK stack enables centralized log management, real-time log analysis, and visualization of logs, providing developers and system administrators with a comprehensive view of the project’s health and performance. This monitoring setup aids in identifying and resolving issues quickly, leading to improved system reliability.

Security Aspects of Azure Project Management

Security Measures

Maintaining robust security is paramount for any project hosted on a cloud platform. Various security measures were implemented, including but not limited to network security groups, identity and access management policies, and encryption mechanisms. These measures help protect sensitive data, prevent unauthorized access, and ensure compliance with industry-specific regulations.

Manual Deployment for the Production Environment

While automated deployments offer significant advantages, it is essential to exercise caution in the production environment. To ensure precise control and reduce the risk of unintended consequences, manual deployment was implemented for the project’s production environment. Manual deployments enable thorough testing, verification, and approvals before releasing changes to the live environment, ensuring a stable and reliable user experience.

Zero Trust Infrastructure Implementation

Given the increasing complexity of cybersecurity threats, a zero-trust infrastructure approach was adopted for the project. This security model treats every access attempt as potentially unauthorized, requiring stringent identity verification and access controls. By implementing zero-trust principles, the project minimizes the risk of data breaches and unauthorized access, bolstering its overall security posture.

Optimizing Cost and Enhancing Efficiency

While Microsoft Azure offers a comprehensive suite of services, it’s essential to ensure cost optimization to maximize the benefits of cloud computing.
Here, we will explore the actions taken to reduce billing usage in Microsoft Azure. By implementing these strategies, the project team can optimize resource allocation, eliminate unnecessary expenses, and achieve significant cost savings.

Backend Scaling Configuration Optimization

One of the key areas for cost reduction is optimizing the backend scaling configuration. By carefully analyzing the project’s workload patterns and performance requirements, the scaling configuration was adjusted to align with actual demand. This ensures that the project provisions resources based on workload needs, avoiding overprovisioning and unnecessary costs. Fine-tuning the backend scaling configuration helps strike a balance between performance and cost-effectiveness.

Scheduler for Container Apps and Environment Optimization

Containerized applications are known for their agility and resource efficiency. To further enhance cost optimization, a scheduler was implemented for container apps. This scheduler automatically starts and stops container instances based on predefined schedules or triggers, eliminating the need for 24/7 availability when it is not required. Additionally, unnecessary environments that were initially provisioned due to core exhaustion were removed, consolidating the project’s resources into a single optimized environment.

Function API for Container Management

To provide developers with control over container instances, a Function API was created. This API allows developers to start and stop containers as needed, enabling them to manage resources efficiently. By implementing this granular control mechanism, the project ensures that resources are only active when necessary, reducing the costs associated with idle containers.

Front Door Configuration Improvement

Front Door, a powerful Azure service for global load balancing and traffic management, was optimized to avoid unnecessary requests to project resources.
By fine-tuning the configuration, the team reduced the number of requests that reached the backend, minimizing resource consumption and subsequently lowering costs. This optimization ensures that only essential traffic is directed to the project’s resources, eliminating waste and enhancing efficiency.

Removal of Unwanted Resources

Over time, projects may accumulate unused or redundant resources, leading to unnecessary billing costs. A thorough audit of the Azure environment was conducted as part of the cost reduction strategy, and unwanted resources were identified and removed. By cleaning up the Azure environment, the project team eliminates unnecessary expenses and optimizes resource allocation, resulting in significant cost savings.

Conclusion

Successfully managing a project on Microsoft Azure requires careful planning, implementation, and ongoing optimization. By leveraging the robust features and capabilities of Microsoft Azure, the project team can ensure a secure, scalable, and reliable solution, ultimately delivering a seamless user experience.

Moreover, cost optimization is a critical aspect of managing projects on Microsoft Azure. By implementing specific strategies to reduce billing usage, such as optimizing backend scaling configurations, implementing schedulers, leveraging Function APIs for resource management, improving Front Door configurations, and removing unwanted resources, the project team can achieve substantial cost savings while maintaining optimal performance. By continuously monitoring and optimizing costs, organizations can ensure that their Azure projects are efficient, cost-effective, and aligned with their budgetary requirements.
DevOps is a game-changing approach to software development. It combines agility, speed, and quality to revolutionize how we create and deploy software. By breaking down barriers between development and operations, DevOps fosters collaboration and enables faster and more reliable software releases. But it's important to remember that DevOps is not just about tools and methods: it's a cultural shift that brings teams together for success. This cultural change involves breaking down existing barriers, enhancing communication, and encouraging continuous improvement.

In this post, I want to talk about building a DevOps culture layer by layer, and in particular to uncover the potential challenges you may encounter in implementing it in your business and the strategies to overcome those obstacles. This journey is not without difficulties, but the benefits of a successful DevOps culture (improved efficiency, faster delivery times, and high-quality products) are well worth the effort.

The Core Principles of DevOps Culture

A successful DevOps culture leans on several core principles:

- Collaboration and communication: This fundamental tenet focuses on breaking down silos and fostering an environment of transparency and mutual respect among developers, the operations team, and other stakeholders.
- Automation: By automating repetitive tasks, organizations can increase efficiency, reduce errors, and allow their teams to focus on more strategic, higher-value work.
- Continuous improvement: A cornerstone of DevOps is the adoption of iterative processes, constantly seeking to improve, innovate, and refine practices and procedures.
- Customer-centric approach: DevOps culture emphasizes the importance of delivering value to the customer quickly and reliably, using customer feedback to guide development and operations.
- Embracing failure: In a DevOps culture, failures are seen as opportunities to learn and innovate, not as setbacks.
Teams are encouraged to take calculated risks, knowing they have the support to learn and grow from the outcomes.

- Uniting teams and expertise: A successful DevOps culture promotes the sharing of knowledge and expertise across teams, fostering a sense of collective ownership and shared responsibility.

By understanding and implementing these core principles, organizations can lay the foundation for a thriving DevOps culture. Let's explore the process of building this culture layer by layer.

The Process: Building a DevOps Culture Layer-by-Layer

To successfully build a DevOps culture, it's essential to follow a systematic approach that involves several key steps:

- Setting the stage: Before committing to DevOps, it's crucial to understand why it is necessary for your organization and what outcomes you hope to achieve. This step includes communicating the benefits of DevOps to all stakeholders and getting buy-in from upper management.
- Setting goals: Meaningful, achievable goals are essential for progress. These targets can include reducing lead times, increasing deployment frequency, or improving software quality. By setting clear goals, teams can measure progress and stay motivated.
- Leading the charge: A technical leader must spearhead the DevOps transformation, acting as a role model for their team and guiding them through the changes.
- Pilot testing: It's best to start small when implementing DevOps. Choose a non-critical project that can serve as a pilot test to identify areas for improvement and refine your processes before scaling up.
- Managing or removing silos: The core of DevOps is communication, so you can't afford collaboration and communication issues if you want your DevOps efforts to work. Break down silos by creating cross-functional teams, and encourage communication and collaboration between them. By doing so, your teams can share knowledge and expertise, fostering a sense of collective ownership.
- Reorganizing team duties and incentives: Traditional roles may need to be redefined in a DevOps culture. For example, developers may need to take on some operations tasks, while operations team members may need to learn coding skills. Incentives and performance metrics should also align with the goals of DevOps.
- Resolving conflicts: With any change, conflicts are bound to arise. It's essential to have a process in place for mediating disputes and promoting open communication among team members.
- Fostering a collaborative environment: A collaborative environment is vital for a successful DevOps culture. Teams must feel comfortable sharing ideas and providing constructive feedback.
- Encouraging end-to-end responsibility: In a DevOps culture, everyone is responsible for the entire software development lifecycle. This mindset promotes accountability and encourages teams to think about the big picture.

By following these steps, organizations can gradually build a strong DevOps culture that supports continuous improvement and drives success.

Challenges and Strategies for Overcoming Them

Building a DevOps culture is not without its challenges. Here are some common obstacles organizations may face and strategies to overcome them.

Resistance to Change

Adopting a DevOps culture requires a significant shift in mindset and practices, which can be met with resistance from team members who are used to traditional development and operations methods. To overcome this, it's crucial to maintain open communication and involve all team members in the process.

Lack of Automation

Without proper automation, DevOps practices can't be fully realized. It's essential to invest in tools that automate tasks and processes to increase efficiency and reduce errors.

Insufficient Collaboration

A DevOps culture relies heavily on collaboration and communication. If team members are not willing to share knowledge and work together, it can impede the success of DevOps.
Organizations must foster a collaborative environment where all team members feel comfortable working together.

Inadequate Leadership Support

For DevOps to succeed, it requires support from upper management. If leaders do not fully understand or support the shift toward DevOps, it can hinder progress. To overcome this, it's crucial to educate leaders on the benefits and outcomes of DevOps.

Shifting Roles and Responsibilities

As mentioned earlier, traditional roles may need to be redefined in a DevOps culture. This shift can cause confusion and conflict among team members. Clear communication and training can help mitigate these challenges, as can proper documentation, so that everyone knows what their role is and where to find the details of their work.

Overall, building a DevOps culture requires patience, persistence, and a willingness to learn and adapt. By following these guidelines and strategies, organizations can lay the foundation for a successful DevOps culture, layer by layer.

Over to You

There is no end to improving and refining a DevOps culture; it's an ongoing process that requires constant evaluation and adaptation. With dedication and a commitment to continuous improvement, organizations can reap the benefits of a thriving DevOps culture that drives success and innovation. So, let's continue building this culture layer by layer together.
Estimating work is hard as it is. Using dates rather than story points as the deciding factor can add even more complications, as dates rarely account for the work you need to do outside of the actual work, like emails, meetings, and additional research. Dates are also harder to measure in terms of velocity, making it harder to estimate how much effort a body of work takes even if you have previous experience. Story points, on the other hand, can bring more certainty and simplify planning in the long run… if you know how to use them.

What Are Story Points in Scrum?

Story points are units of measurement that you use to define the complexity of a user story. In simpler words, you’ll be using a gradation of points from simplest (smallest) to hardest (largest) to rank how long you think it would take to complete a certain body of work. Think of them as rough time estimates of tasks in an agile project. Agile teams typically assign story points based on three major factors:

- The complexity of the work;
- The amount of work that needs to be done;
- And the uncertainty in how one could tackle the task. The less you know about how to complete something, the more time it will take to learn.

How to Estimate a User Story With Story Points

OK, let’s take a good look at the elephant in the room: there’s no one cut-and-dried way of estimating story points. The way we do it on our team is probably different from your estimation method. That’s why I will be talking about estimations on a more conceptual level, making sure anyone who’s new to the subject can understand the process as a whole and then fine-tune it to their needs.

T-shirt size | Story points | Time to deliver work
XS           | 1            | Minutes to 1-2 hours
S            | 2            | Half a day
M            | 3            | 1-2 days
L            | 5            | Half a week
XL           | 8            | Around 1 week
XXL          | 13           | More than 1 week
XXXL         | 21           | Full Sprint

Story point vs. T-shirt size

Story Points of 1 and 2

Estimations that seem the simplest can sometimes be the trickiest.
For example, if you've done something a lot of times and know that this one action shouldn't take longer than 10-15 minutes, then you have a pretty clear one-pointer. That being said, the complexity of a task isn't the only thing you need to consider. Let's take a look at fixing a typo on a WordPress-powered website as an example. All you need to do is log into the interface, find the right page, fix the typo, and click publish. Sounds simple enough. But what if you need to do this multiple times on multiple pages? The task is still simple, but it takes a significantly longer amount of time to complete. The same can be said about data entry and other seemingly trivial tasks that can take a while simply due to the number of actions you'll need to perform and the screens you'll need to load.

Story Point Estimation in Complex User Stories

While seemingly simple stories can be tricky, the much more complex ones are probably even trickier. Think about it: if your engineers estimate they'll need half a week to a week to complete one story, there's probably a lot they're still uncertain about regarding implementation, meaning a story like that could take much longer. Then there's the psychological factor: the team will probably go for the low-hanging fruit first and use the first half of the week to knock down the one-, two-, and three-pointers. This raises the risk of the five- and eight-pointers not being completed during the Sprint. One thing you can do is ask yourself whether the story really needs to be as complex as it is now. Perhaps it would be wiser to break it down. You can find the answer using the KISS principle. KISS stands for "Keep It Simple, Stupid" and prompts you to question whether something needs to be as complex as it is. Applying KISS is pretty easy, too: just ask a couple of simple questions, such as what the value of this story is and whether the same value can be achieved in a more convenient way.
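If you keep estimates in scripts or spreadsheets, the T-shirt scale from the table above is easy to encode. A minimal Python sketch (the mapping mirrors the table; the function name is my own):

```python
# T-shirt sizes mapped to story points, mirroring the estimation table above.
TSHIRT_TO_POINTS = {
    "XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8, "XXL": 13, "XXXL": 21,
}

def to_story_points(tshirt_size: str) -> int:
    """Translate a T-shirt estimate into its story-point value."""
    size = tshirt_size.strip().upper()
    if size not in TSHIRT_TO_POINTS:
        raise ValueError(f"Unknown T-shirt size: {tshirt_size!r}")
    return TSHIRT_TO_POINTS[size]

print(to_story_points("M"))  # 3: roughly 1-2 days of work
```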
"Simplicity is the ultimate sophistication." –Leonardo da Vinci

How to Use Story Points in Atlassian's Jira

A nice trick I like is to give the team the ability to assign story points to epics. Adding the story points field is nothing too in-depth or sophisticated, as a project manager simply needs the ability to assign points easily when creating epics. The rule of thumb here is to indicate whether your development team is experienced and well-equipped to deliver the epic or whether they would need additional resources and time for research. An example of a simpler epic could be the development of a landing page, while a more complex one would be the integration of ChatGPT into a product. The T-shirt approach works like a charm here. While Jira doesn't have the functionality to add story points to epics by default, you can easily add a custom field to do the trick. Please note that you'll need admin permissions to add and configure custom fields in Jira. Assigning story points to user stories is a bit trickier as, ideally, you'd like to take everyone's experience and expertise into consideration. Why? A project manager can decide the complexity of an epic based on what the team has delivered earlier. Individual stories are more nuanced, as engineers will usually have a more precise idea of how they'll deliver this or that piece of functionality, which tools they'll use, and how long it'll take. In my experience, T-shirt sizes don't fit here as well as the Fibonacci sequence. In the Fibonacci sequence, each number is obtained by adding the previous two numbers in the series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, and so on indefinitely. This sequence is used as a scoring scale in Fibonacci agile estimation, where it aids in estimating the effort required for agile development tasks.
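Generating that estimation scale takes only a few lines; here's a quick Python sketch, capped at 89 as in the sequence above:

```python
def fibonacci_scale(limit: int = 89) -> list[int]:
    """Return the Fibonacci numbers up to `limit`, as used for estimation."""
    scale, a, b = [], 0, 1
    while a <= limit:
        scale.append(a)
        a, b = b, a + b
    return scale

print(fibonacci_scale())  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```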
This approach proves highly valuable as it simplifies the process by restricting the number of values in the sequence, eliminating the need for extensive deliberation on complexity nuances. This simplicity is significant because determining complexity based on a finite set of points is much easier. Ultimately, you have the option of selecting either 55 or 89, rather than having to consider the entire range between 55 and 89. As for the collaboration aspect of estimating and assigning story points to user stories, there's a handy tool called Planning Poker, which helps the team collaborate on assigning story points to their issues. Here's the trick: each team member assigns a value to an issue anonymously, keeping their choice incognito. Then, when the cards are revealed, it's fascinating to see if the team has reached a consensus on the complexity of the task. If different opinions emerge, it's actually a great opportunity for engaging in discussions and sharing perspectives. The best part is, this tool seamlessly integrates with Jira, making it a breeze to incorporate into your existing process. It's all about making teamwork smoother and more efficient!

How Does the Process of Assigning Story Points Work?

Before the Sprint kicks off, during the Sprint planning session, the Scrum team engages in thorough discussions regarding the tasks at hand. All the stories are carefully reviewed, and story points are assigned to gauge their complexity. Once the team commits to a Sprint, we have a clear understanding of the stories we'll be tackling and their respective point values, which indicate their significance. As the Sprint progresses, the team diligently works on burning down the stories that meet the Definition of Done by its conclusion. These completed stories are marked as finished. For any unfinished stories, they are returned to the backlog for further refinement and potential re-estimation.
The team has the option to reconsider and bring these stories back into the current Sprint if deemed appropriate. When this practice is consistently followed for each Sprint, the team begins to understand their velocity — a measure of the number of story points they typically complete within a Sprint — over time. It becomes a valuable learning process that aids in product management, planning, and forecasting future workloads.

What Do You Do With Story Points?

As briefly mentioned above: you burn them throughout the Sprint. You see, while story points are a good practice for estimating the amount of work you put into a Sprint, Jira makes them better with Sprint Analytics, showing you the number of points you've actually burned during the Sprint and comparing it to the estimation. These metrics will help you improve your planning in the long run.

- Burndown chart: This report tracks the remaining story points in Jira and predicts the likelihood of completing the Sprint goal.
- Burnup chart: This report works as the opposite of the burndown chart. It tracks the scope independently of the work done and helps agile teams understand the effects of scope change.
- Sprint report: This report analyzes the work done during a Sprint. It is used to point out either overcommitment or scope creep in a Jira project.
- Velocity chart: This is a kind of bird's-eye-view report that shows historic data of work completed from Sprint to Sprint. This chart is a nice tool for predicting how much work your team can reliably deliver based on previously burned Jira story points.

Add Even More Clarity to Your Stories With a Checklist

With a Jira Checklist, you have the ability to create practical checklists and checklist templates. They come in handy when you want to ensure accountability and consistency. This application proves particularly valuable when it comes to crafting and enhancing your stories or other tasks and subtasks.
It allows you to incorporate explicit and visible checklists for the Definition of Done and Acceptance Criteria into your issues, giving you greater clarity and structure. It’s ultimately a useful tool for maintaining organization and streamlining your workflow with automation. Standardization isn’t about the process. It’s about helping people follow it.
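Looking back at the Planning Poker flow described earlier, the reveal step can be sketched in a few lines of Python (this illustrates the idea only; it is not the actual tool's API):

```python
def reveal(votes: dict[str, int]) -> tuple[bool, list[int]]:
    """Reveal anonymously cast Planning Poker votes.

    Returns (consensus_reached, distinct_point_values). A spread of
    values is the cue for the team to discuss the story and re-vote."""
    distinct = sorted(set(votes.values()))
    return len(distinct) == 1, distinct

consensus, spread = reveal({"alice": 5, "bob": 8, "carol": 5})
print(consensus, spread)  # False [5, 8] -> time to talk it through
```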
Your approach to DevOps is likely to be influenced by the methods and practices that came before. For organizations that gave teams autonomy to adapt their process, DevOps would have been a natural progression. Where an organization has been more prescriptive in the past, people will look for familiar tools to run a DevOps implementation, such as maturity models. In this article, I explain why a maturity model isn't appropriate and what you should use instead.

What Is a Maturity Model?

A maturity model organizes groups of characteristics, like processes or activities, into a sequence of maturity levels. By following the groups from the easiest to the most advanced, an organization can implement all the required elements of the model. The process is a journey from adoption through to maturity. Maturity models:

- Provide a roadmap for adopting characteristics
- Make it easier to get started by suggesting a smaller initial set of characteristics
- Can be assessed to provide the organization with a maturity score

For example, a maturity model for riding a bike might have 5 levels of maturity:

1. Walk upright on 2 legs
2. Ride a balance bike with a walking motion
3. Ride a balance bike with both feet off the ground
4. Ride a pedal bike from a starting point facing downhill
5. Ride a pedal bike from a starting point facing uphill

The sequence of maturity levels is a useful roadmap to follow, and you may already be able to achieve the lower levels. Each maturity level is easier to reach from the level below, as the earlier levels provide a basis for increasing your skills and progressing to the next stage. You can also assess someone by asking them to demonstrate their ability at each level. You can create a maturity model by designing the levels first and expanding each with characteristics, or you can collect together all the characteristics before arranging them into levels. You'll find maturity models are commonly used as part of standards and their certification process.
Most process certifications require you to demonstrate that:

- You have documented your process
- People follow the documented process
- You regularly review and improve the process

When you plan to achieve a certification, your roadmap is clear; until you document the process, you can't tell if people are following it.

Limitations of Maturity Models

You can use a maturity model to assess whether a set of activities is taking place, but not whether these activities impact your key outcomes. Maturity models are rigid and require you to adopt all characteristics to achieve maturity levels. You have to trust that following the model will bring you the same benefits experienced by the model's authors. The sequence of maturity levels might not work for everyone. It could slow down your progress or even have counter-productive outcomes. A maturity model doesn't take into account the unique challenges facing your business — it may not even solve the kind of problems you're facing. It also defines an end point that may not be good enough. Maturity models are most commonly used in due-diligence frameworks to ensure suppliers meet a minimum standard for process or security. If you were cynical, you might argue they're used to ensure an organization can't be blamed when one of its suppliers makes a mistake. In DevOps, the context and challenges faced by organizations and teams are so important that a maturity model is not an appropriate tool. If you want to apply a maturity model to DevOps, you may need to adjust your mindset and approach, as there's no fixed end state to DevOps. Neither should the capabilities be adopted in a predetermined order. Maturity models are not appropriate for DevOps because they:

- Assume there is a known answer to your current context
- Focus on arriving at a fixed end state
- Encourage standardization, not innovation and experimentation
- Have a linear progression
- Are activity-based

For DevOps, you need a different kind of model.
Capability Models A capability model describes characteristics in terms of their relationship to an outcome. Rather than arrange sets of characteristics into levels, they connect them to the effect they have on a wider system outcome. Going back to riding a bike, a capability model would show that balance affects riding stability and steering, whereas walking has some bearing on the ability to pedal to power the bicycle. Instead of following the roadmap for learning to ride a bike, you would identify areas that could be improved based on your current attempts to ride. If you were using a capability model, you wouldn't stop once you proved you could ride uphill. Capability models encourage you to continue your improvement efforts, just like Ineos Grenadiers (formerly Sky Professional Racing/Team Sky) who achieved 7 Tour de France wins in their first 10 years using their approach to continuous improvement, which they called marginal gains. A capability model: Focuses on continuous improvement Is multi-dimensional, dynamic, and customizable Understands that the landscape is always changing Is outcome-based When you use a capability model, you accept that high performance today won't be sufficient in the future. Business, technology, and competition are always on the move and you need a mindset that can keep pace. Maturity vs. Capability Models A maturity model tends to measure activities, such as whether a certain tool or process has been implemented. In contrast, capability models are outcome-based, which means you need to use measurements of key outcomes to confirm that changes result in improvements. For example, the DevOps capability model is aligned with the DORA metrics. Using throughput and stability metrics helps you assess the effectiveness of improvements. While maturity models tend to focus on a fixed standardized list of activities, capability models are dynamic and contextual. 
A capability model expects you to select capabilities that you believe will improve your performance given your current goals, industry, organization, team, and the scenario you face at this point in time. You level up in a maturity model based on proficiency against the activities. In a capability model, you constantly add gains as you continuously improve your skills and techniques. These differences are summarized below:

Maturity model | Capability model
Activity-based | Outcome-based
Fixed | Dynamic
Standardized | Contextual
Proficiency | Impact

The DevOps Capability Model

The DevOps capability model is the structural equation model (SEM), sometimes referred to as the big friendly diagram (BFD). It arranges the capabilities into groups and maps the relationships they have to outcomes. Each of the arrows describes a predictive relationship. You can use this map to work out what items will help you solve the problems you're facing. For example, Continuous Delivery depends on several technical capabilities, like version control and trunk-based development, and leads to increased software delivery performance and reduced burnout (among other benefits). If you find this version of the model overwhelming, the 2022 version offers a simpler view, with many of the groups collapsed. Using simplified views of the model can help you navigate it before you drill into the more detailed lists of capabilities.

How to Use the DevOps Model

Depending on which version you look at, the model can seem overwhelming. However, the purpose of the model isn't to provide a list of all the techniques and practices you must adopt. Instead, you can use the model as part of your continuous improvement process to identify which capabilities may help you make your next change. As the capability model is outcome-based, your first task is finding a way to measure the outcomes for your team and organization.
Any improvement you make should eventually move the needle on these outcomes, although a single capability on its own may not make a detectable difference. The DORA metrics are a good place to start, as they use throughput and stability metrics to create a balanced picture of successful software delivery. In the longer term, it's best to connect your measurements to business outcomes. Whatever you measure, everyone involved in software delivery and operations needs to share the same goals. After you can measure the impact of changes, you can review the capability model and select something you believe will bring the biggest benefit to your specific scenario. The highest performers use this process of continuous improvement to make gains every year. The high performers are never done and persistently seek new opportunities to build performance. This is why the high performance of today won't be enough to remain competitive in the future. Conclusion DevOps shouldn't be assessed against a maturity model. You should be wary of anyone who tries to introduce one. Instead, use the structural equation model from Accelerate and the State of DevOps reports as part of your continuous improvement efforts. The DevOps capability model supports the need for constant incremental gains and encourages teams to experiment with their tools and processes. Happy deployments!
In the dynamic world of business, Agile methodologies have become increasingly popular as organizations seek to deliver high-quality products and services more efficiently. As Agile practices gain traction, it is crucial to measure the progress, quality, and performance of Agile projects to ensure their success. This article will delve into various Agile metrics and key performance indicators (KPIs) by providing real-world examples that can help organizations track and evaluate their Agile projects' effectiveness. Understanding Agile Metrics and KPIs With Examples Agile metrics and KPIs are quantifiable measures that offer insights into an Agile project's progress, performance, and quality. They assist teams in identifying areas for improvement, tracking progress toward goals, and ensuring the project remains on track. By gathering and analyzing these metrics, organizations can make data-driven decisions, optimize their processes, and ultimately achieve better results. Key Agile Metrics and KPIs With Examples Velocity This metric measures the amount of work a team completes during a sprint or iteration. It is calculated by adding up the story points or effort estimates for all completed user stories. For example, if a team completes five user stories worth 3, 5, 8, 2, and 13 story points, their velocity for that sprint would be 31. Velocity helps teams understand their capacity and predict how much work they can complete in future sprints. Burn-Up and Burn-Down Charts These charts visualize the progress of a sprint or project by showing the amount of work completed (burn-up) and the remaining work (burn-down). For instance, if a team has a sprint backlog of 50 story points and completes ten story points per day, the burn-down chart will show a decreasing slope as the team progresses through the sprint. These charts help teams monitor their progress toward completing the sprint backlog and provide an early warning if the project is off track. 
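The velocity and burn-down arithmetic from the examples above is simple enough to sketch in Python (the numbers below are the ones used in the text):

```python
def sprint_velocity(completed_story_points: list[int]) -> int:
    """Velocity = total story points of the user stories completed in a sprint."""
    return sum(completed_story_points)

def burndown_series(backlog_points: int, completed_per_day: list[int]) -> list[int]:
    """Remaining points at the end of each day, as a burn-down chart plots them."""
    series, remaining = [], backlog_points
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

print(sprint_velocity([3, 5, 8, 2, 13]))          # 31, as in the velocity example
print(burndown_series(50, [10, 10, 10, 10, 10]))  # [40, 30, 20, 10, 0]
```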
Cycle Time This metric measures the time it takes for a user story to move from the start of the development process to completion. Suppose a team begins working on a user story on Monday and completes it on Thursday. In that case, the cycle time for that story is four days. A shorter cycle time indicates that the team is delivering value to customers more quickly and is a sign of efficient processes. Lead Time This metric measures the time it takes for a user story to move from the initial request to completion. It includes both the time spent waiting in the backlog and the actual development time. For example, if a user story is added to the backlog on January 1st and is completed on January 15th, the lead time would be 15 days. Reducing lead time can help improve customer satisfaction and reduce the risk of scope changes. Cumulative Flow Diagram (CFD) A CFD is a visual representation of the flow of work through a team's process. It shows the amount of work in each stage of the process, such as "To Do," "In Progress," and "Done." By analyzing a CFD, teams can identify bottlenecks, inefficiencies, and areas for improvement. For example, if the "In Progress" stage consistently has a large number of items, it may indicate that the team is struggling with capacity or that work is not moving smoothly through the process. Defect Density This metric measures the number of defects found in a product relative to its size (e.g., lines of code or story points). Suppose a team delivers a feature with 1000 lines of code and discovers ten defects. In that case, the defect density is 0.01 defects per line of code. A lower defect density indicates higher-quality software and can help teams identify areas where their quality practices need improvement. Escaped Defects This metric tracks the number of defects discovered after a product has been released to customers. 
For example, if a team releases a new mobile app and users report 15 bugs within the first week, the team would have 15 escaped defects. A high number of escaped defects may indicate inadequate testing or quality assurance processes. Team Satisfaction Measuring team satisfaction through regular surveys helps gauge team morale and identify potential issues that could impact productivity or project success. For example, a team might be asked to rate their satisfaction with factors such as communication, workload, and work-life balance on a scale of 1 to 5, with 5 being the highest satisfaction level. Customer Satisfaction Collecting customer feedback on delivered features and overall product quality is crucial for ensuring that the project meets customer needs and expectations. For instance, a company might send out surveys to customers asking them to rate their experience with a new software feature on a scale from 1 (very dissatisfied) to 5 (very satisfied). Business Value Delivered This metric measures the tangible benefits a project delivers to the organization, such as increased revenue, cost savings, or improved customer satisfaction. For example, an Agile project might deliver a new e-commerce feature that results in a 10% increase in online sales, representing a clear business value. Using Agile Metrics and KPIs Effectively To maximize the benefits of Agile metrics and KPIs, organizations should: Choose the right metrics: Select metrics relevant to the project's goals and objectives, focusing on those that drive improvement and provide actionable insights. Establish baselines and targets: Identify current performance levels and set targets for improvement to track progress over time. Monitor and analyze data: Regularly review metric data to identify trends, patterns, and areas for improvement. Make data-driven decisions: Use metric data to inform decision-making and prioritize actions with the most significant impact on project success. 
Foster a culture of continuous improvement: Encourage teams to use metrics as a tool for learning and improvement rather than as a means of punishment or control. Conclusion Agile metrics and KPIs play a critical role in ensuring the success of Agile projects by providing valuable insights into progress, performance, and quality. By selecting the right metrics, monitoring them regularly, and using the data to drive continuous improvement, organizations can optimize their Agile processes and achieve better results. Real-world examples help illustrate the practical applications of these metrics, making it easier for teams to understand their importance and implement them effectively.
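Several of the example calculations above (lead time, cycle time, defect density) boil down to a few lines. A Python sketch, using the article's inclusive day counting (Monday to Thursday counts as four days); the concrete dates are illustrative:

```python
from datetime import date

def duration_in_days(start: date, end: date) -> int:
    """Inclusive day count, matching the article's examples."""
    return (end - start).days + 1

def defect_density(defects_found: int, size: int) -> float:
    """Defects relative to product size (e.g., lines of code or story points)."""
    return defects_found / size

# Lead time from the example: added to the backlog on January 1st, done on the 15th.
print(duration_in_days(date(2024, 1, 1), date(2024, 1, 15)))  # 15 days
print(defect_density(10, 1000))  # 0.01 defects per line of code
```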
As a Product Owner (PO), your role is crucial in steering an agile project toward success. However, it's equally important to be aware of the pitfalls that can lead to failure. It's worth noting that the GIGO (Garbage In, Garbage Out) effect is a significant factor: no good product can come from bad design.

On Agile and Business Design Skills

Lack of Design Methodology Awareness

One of the first steps towards failure is disregarding design methodologies such as Story Mapping, Event Storming, Impact Mapping, or Behavior-Driven Development. Treating these methodologies as trivial or underestimating their complexity or power can hinder your project's progress. Instead, take the time to learn, practice, and seek coaching in these techniques to create well-defined business requirements. For example, I once worked on a project where the PO practiced Story Mapping without even involving the end-users...

Ignoring Domain Knowledge

Neglecting to understand your business domain can be detrimental. Avoid skipping internal training sessions, Massive Open Online Courses (MOOCs), and field observation workshops. Read domain reference books and, more generally, embrace domain knowledge to make informed decisions that resonate with both end-users and stakeholders. To continue with the previous example, the PO, who was new to the project's domain (although having basic knowledge), missed an entire use case with serious architectural implications, requiring significant software changes after only a few months.

Disregarding End-User Feedback

Overestimating your understanding and undervaluing end-user feedback can lead to the Dunning-Kruger effect. Embrace humility and actively involve end-users in the decision-making process to create solutions that truly meet their needs. Failure to consider real-world user constraints and work processes can lead to impractical designs.
Analyze actual, operational user experiences, collect feedback, and adjust your approach accordingly. Don't imagine users' requirements and issues; ask actual users who deal with real-world complexity all the time. For instance, a PO I worked with ignored or postponed many obvious GUI issues reported by end-users, rendering the application nearly unusable. These UX issues included the absence of basic filters on screens, making it impossible for users to find their ongoing tasks. Yet these issues were relatively simple to fix. Conversely, this PO pushed unasked-for features and even features rejected by most end-users, such as complex GUI locking options. Furthermore, any attempt to set up tools to collect end-user feedback was dismissed.

Team Dynamics

Centralized Decision-Making

Isolating decision-making authority in your own hands without consulting IT or other designers can stifle creativity and collaboration. Instead, foster open communication and involve team members in shaping the project's direction. The three pillars of empirical process control in Scrum are Transparency, Inspection, and Adaptation. The essence of an agile team is continuous improvement, which becomes challenging when a lack of trust hinders the identification of real issues. Some POs unfortunately adopt a "divide and rule" approach, which keeps knowledge and power in their sole hands. I have observed instances where POs withheld information or even released incorrect information to both end-users and developers, and actively prevented any exchange between them.

Geographical Disconnection

Geographically separating end-users, designers, testers, the PO, and developers can hinder communication. Leverage modern collaboration tools, but don't rely solely on them. Balance digital tools with face-to-face interactions to maintain strong team connections and enable osmotic communication, which has proven to be highly efficient in keeping everyone informed and involved.
The worst case I had to deal with was a project where developers were centralized in the same building as the end-users, while the PO and design team were located in another city. Most workshops were done remotely between the two cities. In the end, the design result was very poor. It improved drastically when some designers were finally collocated with the end-users (and developers) and were able to conduct in situ formal and informal workshops.

Planning and Execution

Over-Optimism and Lack of Contingency Plans

Hope should not be your strategy. Don't oversell features to end-users. Being overly optimistic and neglecting backup plans can lead to missed deadlines and unexpected challenges. Develop robust contingency plans (a Plan B) to navigate uncertainties effectively. Avoid promising unsustainable plans to stakeholders; after two or three delays, they may lose trust in the project. I worked on a project where the main release was announced to stakeholders by the PO every two months over a 1.5-year timeline, without consulting the development team. As you can imagine, the effect on the project's image was devastating.

Inadequate Stakeholder Engagement

Excluding business stakeholders from demos and delaying critical communications can lead to misunderstandings and misaligned expectations. Regularly engage stakeholders to maintain transparency and gather valuable feedback. As an illustration, in a previous project, we conducted regular sprint demos; however, we failed to invite end-users to most sessions. Consequently, significant ergonomic issues went unnoticed, resulting in a substantial loss of time. Additionally, within the same project, the PO organized meetings with end-users mainly to present solutions via fully completed mockups, rather than facilitating discussions to precisely identify operational requirements, which inhibited their input.
Embracing Waterfall Practices

Thinking in terms of a waterfall approach, rather than embracing iterative development, can hinder progress, especially on a project meant to be managed with agile methodologies. Minimize misunderstandings by providing regular updates to stakeholders. Break features into increments, leverage Proofs of Concept (POCs), and prioritize the creation of Minimum Viable Products (MVPs) to validate assumptions and ensure steady progress. As an example, I recently had a meeting with end-users explaining that a one-year coding tunnel resulted in a first application version that was almost unusable and worse than the 20-year-old application we were supposed to rewrite. With re-established communication and end-user involvement, this was fixed in a few months.

Producing Too Much Waste

As a designer, avoid creating a large stock of user stories (US) that will only be implemented months or years later. Doing so works against the Lean principle of fighting the overproduction muda (waste): you produce many specifications at the worst possible moment (when you know the least about actual business requirements), and this work has every chance of being thrown away. I had an experience where a PO and their designer team wrote user stories up to a year before they were actually coded and left them almost unmaintained. As expected, most of this work was thrown away or, even worse, caused various flaws and misunderstandings among the development team when finally planned for the next sprint. Most backlog refinements and explanations had to be redone. User stories should be refined to a detailed state only one or two sprints before being coded. However, it's a good practice to fill the backlog sandbox with generally outlined features. The rule of thumb is straightforward: user stories should be detailed as close to the coding stage as possible. When they are fully detailed, they are ready for coding. Otherwise, you are likely to waste time and resources.
Volatile Objectives

Try to set consistent objectives for each sprint. Avoid context switching among developers, which can lead them to start many different features but never finish any. To provide an example, in a project where the PO interacted with multiple partners, priorities were altered every two or three sprints, mainly due to political considerations. This was often done to appease the most frustrated partners who were awaiting certain features (often promised with unrealistic deadlines).

Lack of Planning Flexibility

Utilize the DevOps methodology toolkit, including tools such as feature flags, dark deployments, and canary testing, to facilitate more streamlined planning and deployment processes. As an architect, I once had a tough time convincing a PO to use a canary deployment strategy to learn fast and release early while greatly limiting risks. After a resounding failure when opening the application to the entire population, we finally used canary testing and discovered performance and critical issues with a limited set of volunteer end-users. It is now a critical part of the project management toolkit we use extensively.

Extended Delays Between Deployments

Even if a product is built incrementally within 2- or 3-week timeframes, many large projects (including all those I've been a part of) tend to wait for several iterations before deploying the software to production. This presents a challenge because each iteration should ideally deliver some form of value, even if it's relatively small, to end-users. This approach aligns with the mantra famously advocated by Linus Torvalds: "Release early, release often." Some Product Owners are hesitant to push iterations into production, often for misguided reasons.
These concerns include: fears of introducing bugs (indicating a lack of automated and acceptance testing); incomplete iterations (highlighting issues with user story estimation or development team velocity); a desire to provide end-users with a more extensive set of features in one go, assuming they'll appreciate it; or an attempt to simplify the user learning curve (revealing potential user experience (UX) shortcomings). In my experience, this hesitation tends to result in the accumulation of various issues, such as bugs or performance problems. Design Considerations Solution-First Mentality Prioritizing solutions over understanding the business needs can lead to misguided decisions. Focus on the "Why" before diving into the "How" to create solutions that truly address user requirements. As a bad practice, I've seen user stories including technical content (like SQL queries) or presenting detailed technical operations or screens as business rules. Oversized User Stories Designing large, complex user stories instead of breaking them into manageable increments can lead to confusion and delays. Embrace smaller, more focused user stories to facilitate smoother development, predictability in planning, and testing. Inexperienced Product Owners (POs) often find it challenging to break down features into small, manageable user stories (US). This is something of an art, and there are numerous ways to accomplish it depending on the context. However, it's important to remember that each story should deliver value to end-users. As an example, in a previous project, the Product Owner (PO) struggled to divide stories effectively or engaged in purely technical splitting, such as creating one user story (US) for the frontend and another for the backend portion of a substantial feature. Consequently, 50% of the time, this resulted in incomplete user stories that required rescheduling for the subsequent sprint.
Neglecting Expertise Avoiding consultation with experts such as UX designers, accessibility specialists, and legal advisors can result in suboptimal solutions. Leverage their insights to create more effective and user-friendly designs. As a case in point, I've observed multiple projects where the lack of proper user experience (UX) work led to inadequately designed graphical user interfaces (GUIs), incurring substantial costs for rectification at a later stage. In specific instances, certain projects demanded legal expertise, particularly in matters of data privacy. I also encountered a situation where a Product Owner (PO) failed to involve legal specialists, resulting in the final product omitting crucial legal notices and even necessitating significant architectural revisions. Ignoring Performance Considerations Neglecting performance constraints, such as displaying excessive data on screens without filters, can negatively impact user experience. Prioritize efficient design to ensure optimal system performance. I once worked on a large project where the Product Owner (PO) requested the computation of a Gantt chart involving tens of thousands of tasks spanning over 5 years. Ironically, in 99.9% of cases, a single week was sufficient. This unnecessarily intricate requirement significantly complicated the design process and resulted in the product becoming nearly unusable due to its excessive slowness. Using the Wrong Words Failing to establish a shared business language and glossary creates confusion between technical and business teams. Embrace the Ubiquitous Language (UL) principle from Domain-Driven Design to enhance communication and clarity. I once worked on a project where the PO and designers didn't set up any glossary of business terms, used custom vocabulary instead of the business's own, and used fuzzy or interchangeable synonyms even for the terms they had coined themselves. This created many issues and much confusion among the team and end-users, and even led to duplicated work.
Postponing Legal and Regulatory Considerations Late discovery of legal, accessibility, or regulatory requirements can lead to costly revisions. Incorporate these considerations early to avoid setbacks during development. I observed a very large project where the Social Security number had to be eliminated later on, which required additional transformation tools since this constraint was not taken into account from the beginning. Code Considerations Interferences Refine business requirements, but don't interfere with code organization, which often has its own constraints. For instance, asking the development team to always enforce the reuse (DRY) principle through very generic interfaces comes from a good intention but may greatly overcomplicate the code (violating the KISS principle). In a recent project, a Product Owner (PO) with a background in development frequently complicated the design by explicitly instructing developers to extend existing endpoints or SQL queries instead of creating entirely new ones, which would have been simpler. Many developers followed the instructions outlined in the user stories (US) without fully grasping the potential drawbacks in the actual implementation. This occasionally resulted in convoluted code and wasted time rather than efficiency gains. Acceptance Testing Neglecting Alternate Paths Focusing solely on nominal cases (“happy paths”) and ignoring real-world scenarios results in very incomplete testing. Ensure that all possible paths, including corner cases, are thoroughly tested to deliver a robust solution. In a prior project, a multitude of bugs and crashes surfaced exclusively in production because testing had been limited to nominal scenarios. This led to team disorganization as urgent hotfixes had to be written immediately, tarnishing the project's reputation and incurring substantial costs.
Missing Acceptance Criteria Leverage the Three Amigos principle to involve cross-functional team members in creating comprehensive acceptance criteria. Incorporate examples in user stories to clarify expectations and ensure consistent understanding. Example mapping is a great workshop for achieving this. Writing down examples ensures several things: first, that you have at least one realistic case for the requirement and that it is not imaginary; second, listing different cases is a powerful way to enumerate alternate paths (see the previous point) and make them emerge; lastly, examples are among the best shared-understanding material you can give developers. By way of illustration, when designers began documenting real-life scenarios using Behavior-Driven Development (BDD) executable specifications, numerous alternate paths emerged naturally. This led to a reduction in production issues (as discussed in the previous section) and a gradual slowdown in their occurrence. Lack of Professional Testing Expertise Incorporating professional testers and testing tools enhances defect detection and overall quality. Invest in thorough testing to identify issues early, ensuring a smoother user experience. Not using dedicated tools also makes it harder for external stakeholders to figure out what has actually been tested. Conducting rigorous testing is a genuine skill. In a previous project, I witnessed testers using basic spreadsheets to record and track testing scenarios. This approach made it difficult to accurately determine what had been tested and what hadn't. Consequently, the Product Owner (PO) had to validate releases without a clear understanding of the testing coverage. Tools like the open-source SquashTM are excellent for specifying test requirements and monitoring acceptance test coverage.
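To illustrate what an executable specification can look like, here is a minimal sketch of how one example from an example-mapping session might become a plain pytest test. The refund domain, the 30-day window, and all names are hypothetical; a real project would typically use a BDD tool such as Cucumber, Spock, or pytest-bdd.

```python
# A Gherkin-style scenario captured during example mapping, then made
# executable as a plain pytest-style test (domain is hypothetical):
#
#   Scenario: Refund rejected after the legal window
#     Given an order delivered 45 days ago
#     When the customer requests a refund
#     Then the request is rejected with reason "refund window expired"

from dataclasses import dataclass
from datetime import date, timedelta

REFUND_WINDOW_DAYS = 30  # assumed business rule for this sketch


@dataclass
class Order:
    delivered_on: date


def request_refund(order: Order, today: date) -> str:
    """Apply the (hypothetical) refund business rule."""
    if (today - order.delivered_on).days > REFUND_WINDOW_DAYS:
        return "refund window expired"
    return "accepted"


def test_refund_rejected_after_window():
    # Given an order delivered 45 days ago
    today = date(2024, 3, 1)
    order = Order(delivered_on=today - timedelta(days=45))
    # When the customer requests a refund
    result = request_refund(order, today)
    # Then the request is rejected with the expected reason
    assert result == "refund window expired"
```

The Given/When/Then structure keeps the test readable by designers and testers, while the assertion makes the example verifiable on every build; each alternate path discovered during example mapping becomes one more test like this.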
Furthermore, the testers were not testing professionals but rather designers, which frequently resulted in challenges when trying to obtain detailed bug reports. These reports lacked precision, omitting crucial information such as the exact time, logs, scenarios, and datasets necessary for effective issue reproduction.

Take-Away Summary

Symptom: A solution that is not aligned with end-users' needs.
Possible causes and solutions:
- Ineffective workshops with end-users: if workshops are conducted remotely, consider organizing them onsite; ensure you are familiar with agile design methods like Story Mapping.
- Insufficient attention to end-users' needs: make sure to understand the genuine needs and concerns of end-users, and avoid relying solely on personal intuitions or managerial opinions; gather end-users' feedback early and frequently; utilize appropriate domain-specific terminology (Ubiquitous Language).

Symptom: Limited trust from end-users and/or the development team.
Possible causes and solutions:
- Centralized decision-making: foster open communication and involve team members in shaping the project's direction; enhance transparency through increased communication and information sharing.
- Unrealistic timelines: remember that "Hope is not a strategy"; avoid excessive optimism; aim for consistent objectives in each sprint and establish a clear trajectory; employ tools that enhance schedule flexibility and ensure secure production releases, such as canary testing.

Symptom: Design overhead.
Possible causes and solutions:
- User story overproduction: minimize muda (waste) and refine user stories only when necessary, just before they are coded.
- Challenges in designer-development team communication: encourage regular physical presence of both design and development teams in the same location, ideally several days a week, to enhance direct and osmotic communication; focus on describing the 'why' rather than the 'how', and leave technical specifications to the development team. For instance, when designing a database model, you might create the Conceptual Data Model, but ensure the team knows it's not the Physical Data Model.

Symptom: Discovery of numerous production bugs.
Possible causes and solutions:
- Incomplete acceptance testing: develop acceptance tests simultaneously with the user stories and in collaboration with future testers; conduct tests in a professional and traceable manner, involving trained testers who use appropriate tools; test not only the 'happy paths' but also as many alternative paths as possible.
- Lack of automation: implement automated tests, especially unit tests and, equally important, executable specifications (Behavior-Driven Development) derived from the acceptance tests outlined in the user stories. Explore tools like Spock.

Conclusion By avoiding these common pitfalls, you can significantly increase the chances of a successful agile project. Remember, effective collaboration, clear communication, and a user-centric mindset are key to delivering valuable outcomes. A Product Owner (PO) is a role, not merely a job. It necessitates training, support, and a readiness to continuously challenge one's assumptions. It's worth noting that a project can fail even with a good design when blueprints and good coding practices are not followed, but that is an entirely different topic. However, due to the GIGO (garbage in, garbage out) effect, no good product can ever emerge from a bad design phase.
I’ve been working in IT in various roles for more than 12 years, and I have witnessed and experienced how release cycles became faster and faster over time. Seeing recent (and not so recent) trends in competitiveness, the degradation of attention spans, the advertisement of short-term satisfaction, etc., I wonder where this increase in speed will lead in terms of software quality, end-user experience and satisfaction, and engineer burnout, and whether it is possible to slow down a little. What Do We Want? Anything! When Do We Want It? Yesterday! Two things come to my mind regarding this topic: shortened attention spans and the desire for short-term, perceived satisfaction. I hear people complain about shortened attention spans, which are due to many factors like social media (see products like TikTok, YouTube Shorts, and many others), and I feel as if more and more products pop up that promote and worsen this. At the same time, I also see people wanting and promoting short-term or immediate satisfaction, for nearly anything. With online purchases, same-day shipping, online trading, 30-40-year mortgages to get that house now, and more, people nowadays want everything, and they want it yesterday. Long-term planning and long-term satisfaction are becoming less of a thing, and that is, to a degree, mirrored by the IT sector as well. Drawing a parallel with software releases: nowadays, continuous delivery and deployment are very prominent along with containerization, and with all that automation, production deployments can happen every hour, every 30 minutes, or even more often. Although this is definitely a good thing for deploying bug fixes or security fixes, I wonder how much benefit it really has when it comes to deploying features and feature improvements.
Does newly deployed code actually benefit your end-users (whether for competing with another company or not), or benefit the company's engineers by improving or laying the foundation for further development (I see no problem with this), or is it deployed only because of a perceived or false urgency coming from higher-ups, or from the general state of the current digital market, to "compete" with others? Does One Actually Know What They Want? In childhood, we are (ideally) taught that we don't necessarily get everything we want, and not necessarily when we want it; we have to work hard for many things, and the results and success will materialize sometime in the future, along with periodic little successes along the way. I separate wants into two categories: whether your customers know what they want, and whether you or your company know what your customers want. As for the former: when your customers are satisfied with your product, that is awesome. But when you always give them everything they want, and/or immediately, you might end up with cases like this: it is a problem if a certain video game is not released on time, but it is also a problem if it is released on time due to pressure, with questionable quality. So, customers don't even necessarily know what they want. (If you are a Hungarian reader, you might be familiar with the song called Az a baj from Bëlga.) As for the latter, at least in the world of outsourcing, there is a distinction: the client pays you for delivering what they want to their customers, while your children don't. Clients might leave like an offended child (and might never come back) if they don't get what they want when they want it, even if that thing is unrealistic or, worse, not quite legal. But that is something that can be shaped by great managers, Product Owners, Business Analysts, etc. On the Topic of Analytics It is one thing that end-users, as if in a candy store, don't necessarily know what they want.
But do companies themselves actually know what their customers want, or do they just throw things at the wall, hoping that something will stick? Companies can get their hands on so much analytics data that it's no easy feat to process it, organize it, and actually do something with it. If companies don't have the (right) people, or those people burn out and leave, they will have no proper clue of what their customers want and in what way. This isn't helped by the fact that, in some cases, there may be a large amount of inconclusive or contradictory information on how certain features are received by customers. Take, for example, one of my previous projects: there were several hundred active multivariate tests running on the site, in many cases with 5, 6, or more variants at the same time, along with tens of active SiteSpect tests in the same area. How you draw conclusive results from them regarding customer conversion rate and what works for your users is magic to me. All of this can result in premature decision-making and can lead to uncertainty. What About Slowing Down? It may be a good idea to consider taking a step back, getting some perspective on how your project operates, and slowing down the release cadence a little. Whether it is for better care of your customers and your project members, or you just want the project to get more money from users, releasing less frequently may be a good idea... if you can reallocate time, money, and effort properly. I know slowing down the release cadence is not applicable to all domains and projects, and it might suit small, big, open-source, greenfield, startup, commercial, etc. projects differently. But the option is still there, and it doesn't hurt to think about its potential benefits. A Personal Example I personally develop plugins for JetBrains IDEs, and I like to bulk-release new features and improvements.
It gives me more time and headspace to come up with ideas to further polish features, so users get a better version of them right away instead of in chunks. This also produces less technical and feature debt for me and less paperwork, e.g., for tracking and maintaining issues on GitHub. It also saves me time because I go through the release process less frequently, producing fewer release-related artifacts over time compared to, for example, releasing each new feature individually. And it results in fewer plugin update cycles for IDE users as well. However, when it comes to important or critical bug fixes, I do release them quickly. Question Time I know it can work quite differently for companies with hundreds of engineers finishing features and code changes each day and wanting to release those features into the wild soon after they are finished. But, even in that case, it might be a good idea to slow down a bit. It reminds me of publishing music: you can either drop a hit single now and then and later wrap the singles up and call them an "album," or you can work on a bigger theme for a while and publish it all at once as a real album. I also realize that in regulated domains, frequent and fast releases are required to keep up with regulations and legal matters. (I still remember when the cookie policy and GDPR dropped.) Now, let me pose a few questions you might want to explore deeper: What if you released less frequently? Do you need a release every minute/hour/day? Would a one-week/two-week/one-month/... cadence be better suited for your project? Would it make sense for your domain and project? Do you actually have the option to do so? What advantages and disadvantages would it have for your engineering and testing teams' mental health? Would they feel more ownership of the features they develop? If you have manual testing for each release, would less frequent releases have a positive effect on your testing team?
Would they have more time to focus on less tedious tasks? Would they have more time to migrate manual test cases to automated ones? What effect would it have on your end-users/customers' satisfaction? On your customers' conversion rate? On your infrastructure and code/feature quality? On your business in terms of customer retention, income, and growth? If you are in a cloud infrastructure, would less frequent deployments lower the costs of operation? Let me also get a little deeper into two specific items from this list that I have had experience with before. Feeling Ownership of the Product I'm sure you've been in a situation, or at least heard someone, after finishing a bigger feature, say something like, "We could also do this and this!", "We should also add this!" and their variations. When you don't have the time to enjoy what you've accomplished, and you are instantly thinking about the next steps, it's difficult to feel ownership of the feature you've just finished. If engineers had the option to work longer on a feature and release it in a better state in fewer releases, I think that feeling of ownership and accomplishment could become stronger. (Unless you cannot relate to the feature or the whole domain at all for any reason.) On a related note, I recommend you watch the YouTube video called Running a Marathon, one mile every hour from Beau Miles, from which my favorite quote is, "Been meaning to do this for two years. 10-minute job." Manual Release Regression Years ago, I was on a project in the hotel accommodation domain, on which the website we developed was also used as a platform so other companies could pay our project's company to create branded websites for them. The creation and configuration of these sites were handled by a dedicated team, but testing affected many other teams. 
Almost every two weeks, we received an email from that team asking us if we would be kind enough to "do a slight regression" (automated and manual) in our area on those (up to ~20) new branded sites and/or points of sale. I know that site creation was an on-demand process, and it was necessary to serve those customers quickly. That is fine. However, many people would have benefited if those manual regressions had occurred, say, only once a month instead of bi-weekly. Even with a bigger scope and workload, it would have required less effort from our team (context switching, reorganizing sprint activities, communication overhead, etc.), especially since there wasn't always a heads-up about when we would have to do it. Thus, not much planning could happen beforehand. Closing Thoughts Many more aspects could have been mentioned and taken into account, but covering every aspect of this topic was not the intention of this article. Hopefully, you leave with some of these questions and ideas, thinking about how to answer them or apply them to your projects. This article came to life with the help of Zoltán Limpek. I appreciate your feedback and help.
Stakeholders often resist Agile transformations due to fears about job security, perceived loss of control, comfort with established practices, and misconceptions about Agile. However, we can help: Agile practitioners can ease the change process by employing techniques such as empathetic listening, co-creating the change process, introducing incremental changes, offering targeted education, and showcasing internal success stories. Addressing resistance with understanding and respect is pivotal to a successful Agile transformation. Resistance to Agile Transformations: The Unspoken Reasons Agile transformations often meet resistance from long-standing middle managers and other stakeholders. The reasons for this resistance are multifaceted and often misunderstood. Let’s shed light on some of the reasons behind this resistance to Agile transformations: Economic and personal security: Beyond the obvious concern of job security, there’s a more profound fear: “What if I can’t adapt?” This fear encompasses concerns about becoming obsolete in the job market, the potential of diminished respect from peers, and the anxiety of starting from scratch in a new, Agile-centric role. Career investment: When stakeholders have committed decades of their professional lives to a particular method of work or climbing the traditional corporate ladder, it’s not just about the time they’ve spent but also the sacrifices they’ve made. They’ve missed family events for late meetings and pulled all-nighters for projects, to name a few. For them, embracing Agile might feel like admitting those sacrifices were in vain. The emotional toll of considering such a pivot is significant. Comfort in familiarity: Change, by its very nature, is disruptive. The structures, policies, and protocols in place for years represent a known variable. The transformation to Agile is not merely about adopting new practices but involves unlearning established working methods.
This ‘unlearning’ can be profoundly unsettling, especially for those who’ve mastered the old ways. Loss of control: Traditional management often involves direct oversight and a transparent chain of command. Agile’s approach, emphasizing team autonomy, can make managers feel sidelined or redundant, leading to concerns about their role’s future relevance. Network and influence: Power dynamics in organizations are complex. Over the years, stakeholders have established informal power structures and alliances, which help them expedite decisions, secure resources, or simply get things done. With its flattened hierarchies, Agile can be seen as a direct threat to these power structures, leading to resistance. Perceived threat to expertise: Agile’s emphasis on cross-functional teams and shared responsibilities can be interpreted as diluting specialization. Stakeholders might fear their unique expertise, which gave them an edge in traditional settings, might be devalued in Agile environments. Cultural misalignment: Traditional corporate cultures often value predictability, risk aversion, and control. Concepts like “embracing change” or “celebrating failure as a learning opportunity” from the Agile world might appear counter-intuitive or reckless to stakeholders steeped in traditional values. Identity crisis: Roles like ‘manager’ or ‘supervisor’ are not just job titles but identities earned over the years. Agile’s blurring of traditional roles can cause stakeholders to grapple with existential professional questions. “If there are no managers in Agile, and I’ve been a manager all my life, where do I fit in?” Distrust in “new trends”: The corporate world has seen its fair share of management fads come and go. For long-standing stakeholders, Agile might seem like just another buzzword that will fade with time. They might be wary of investing time and energy into something they believe has a short shelf-life. 
Peer pressure: Collective resistance can stem from a shared apprehension of the unknown. If a few influential stakeholders resist the Agile transformation, it can create a ripple effect. Others might join the resistance, not necessarily because they disagree with Agile but because they don’t want to go against the prevailing sentiment. In essence, resistance to Agile transformations isn’t about denying its potential benefits. It’s rooted in human fears and anxieties about change, identity, and security. As Agile practitioners, recognizing and addressing these concerns is crucial to fostering genuine transformation. Addressing them involves promoting the benefits of Agile and empathetically understanding and navigating the human aspect of transformation. How Can Agile Practitioners Support Stakeholders? There are five practices and techniques that Agile practitioners can generally apply to address stakeholders’ resistance to Agile transformations: Empathetic Listening and Open Dialogue Description: This involves creating a safe space for stakeholders to voice their concerns without fear of judgment. By truly understanding the root causes of resistance, practitioners can address them more effectively. Why it works: Often, the fears and concerns of stakeholders are at least partially justified. By acknowledging and addressing these fears directly, Agile practitioners can build trust and show stakeholders their concerns are valued and understood. Co-Creation of the Change Process Description: Instead of imposing change from the top-down, outside-in, or bottom-up, involve resistant stakeholders in shaping the transformation process. Let them have a say in how Agile is adopted and tailored to the organization. Why it works: People are generally more accepting of changes they have a hand in creating. This approach gives them a sense of ownership and reduces the feeling of being subjected to an external force.
Incremental Change and Celebrating Small Wins Description: Rather than a wholesale, radical transformation, introduce changes gradually. As each change is implemented, celebrate the small victories and benefits that emerge. Why it works: This makes the transformation less overwhelming and gives stakeholders time to adjust. Celebrating small wins helps to build momentum and showcases the benefits of the transformation. Education and Training Description: Offer workshops, training sessions, and educational resources to help stakeholders better understand Agile principles and practices, thus demystifying Agile and making it more approachable. Why it works: Resistance often stems from a lack of understanding. By providing clear and accessible information, practitioners can alleviate stakeholders’ concerns about Agile. Showcasing Success Stories Description: Highlight teams or departments within the organization that have successfully adopted Agile and are reaping its benefits. Share their stories, challenges, and outcomes with the broader organization. Why it works: Seeing peers succeed with Agile can be a powerful motivator. It provides tangible proof that the transformation can work in the organization’s specific context. Conclusion If you’re an Agile practitioner, think about this: Instead of trying to ‘sell’ Agile, how might you address these deep-rooted fears and concerns directly? How can you build bridges of understanding and trust? Remember, transformation is as much about people as it is about processes. Resistance to Agile transformations is not inevitable. Please share your experience with us in the comments.
Three primary methods for measuring team productivity are the SPACE framework, DORA metrics, and Goals/Signals/Metrics (GSM). The SPACE Framework for Team Productivity In the research paper “The SPACE of Developer Productivity” (2021), Nicole Forsgren and her colleagues defined a systematic approach to measuring, understanding, and optimizing engineering productivity. It encourages leaders to take a comprehensive approach to productivity, communicating measurements with one another and connecting them to team objectives. Engineering productivity is categorized along five dimensions, which give the SPACE framework its name. S: Satisfaction and Well-Being Here, we measure whether our team members are fulfilled and happy, usually via surveys. Why do we do this? Because satisfaction is correlated with productivity: productive but unhappy teams will burn out sooner rather than later. P: Performance This is harder to quantify because producing more code in a unit of time is not a measure of high-quality code or productivity. Instead, we can measure defect rates or the change failure rate, i.e., the percentage of deployments causing a failure in production; every loss of output harms a team's productivity. The number of merged PRs over time is also correlated with productivity. A: Activity Activities are usually visible. Here, we can measure the number of commits per day or deployment frequency, i.e., how often we push new features to production. C: Collaboration and Communication We want extensive and effective collaboration between individuals and groups for a productive team. In addition, productive teams usually rely on high transparency and awareness of other people's work. Here, we can measure PR review time, the quality of meetings, and knowledge sharing.
E: Efficiency and Flow Flow measures an individual's ability to complete work quickly and without interruption, while efficiency means the same at the team level. Our goal is to foster an environment where developers can reach and keep flow for the longest possible period each day, while also helping them feel content with their routines. To implement the SPACE framework, the authors recommend selecting metrics from at least three of the five dimensions and aligning them with company goals and team priorities. The measures a team selects reflect the team's values. Start with team-level metrics, and once they prove successful, roll them out to the broader organization. Example metrics for each dimension are given in the paper (“The SPACE of Developer Productivity,” N. Forsgren et al., 2021). DORA Metrics for Team Productivity Another way to measure team productivity is DORA metrics. With these metrics, we evaluate team performance based on the following: Lead time for changes is the time between a commit and production. Elite performers do this in less than one hour, while medium performers need one day to one week. Deployment frequency is how often we ship changes. Elite performers deploy multiple times per day, while medium ones deploy once a month to once every six months. The mean time to recovery is the average time it takes your team to restore service when there’s an outage. Elite performers do this in less than one hour, while medium ones take one day to one week. The change failure rate is the percentage of releases that result in downtime. Elite performers sit at 0-15%, while medium performers are at 16-30%. The lead time for changes and the deployment frequency reveal a team's velocity and how quickly it reacts to the constantly changing needs of consumers. The mean time to recovery and the change failure rate indicate the stability of a service and how responsive the team is to service outages or failures.
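As a rough illustration of how the four DORA metrics can be derived from a deployment log, here is a minimal sketch; the `Deployment` record and its field names are assumptions for this example, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import List, Optional


@dataclass
class Deployment:
    committed_at: datetime                  # earliest commit in the release
    deployed_at: datetime                   # when it reached production
    failed: bool = False                    # caused a production incident
    restored_at: Optional[datetime] = None  # when service was restored


def dora_metrics(deploys: List[Deployment], period_days: int) -> dict:
    """Compute the four DORA metrics from a deployment log."""
    lead_hours = [(d.deployed_at - d.committed_at).total_seconds() / 3600
                  for d in deploys]
    failures = [d for d in deploys if d.failed]
    recovery_hours = [(d.restored_at - d.deployed_at).total_seconds() / 3600
                      for d in failures if d.restored_at]
    return {
        "lead_time_for_changes_h": mean(lead_hours),
        "deployment_frequency_per_day": len(deploys) / period_days,
        "change_failure_rate": len(failures) / len(deploys),
        "mean_time_to_recovery_h": mean(recovery_hours) if recovery_hours else None,
    }


# Example: two deployments over two days, one of which failed.
log = [
    Deployment(datetime(2024, 1, 1, 10), datetime(2024, 1, 1, 12)),
    Deployment(datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 14),
               failed=True, restored_at=datetime(2024, 1, 2, 15)),
]
metrics = dora_metrics(log, period_days=2)
# lead time: (2h + 4h) / 2 = 3h; frequency: 1/day; failure rate: 0.5; MTTR: 1h
```

In practice, these fields would come from your CI/CD system and incident tracker rather than being constructed by hand.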
Comparing all four essential criteria, one can assess how successfully a company balances speed and stability. Goals/Signals/Metrics (GSM) for Measuring Developer Productivity There are other productivity frameworks, too, such as Goals/Signals/Metrics (GSM) from Google. In this framework, you first agree that there is a problem worth solving, then set a goal for what you want to achieve, and decide which statements, if true, would indicate that you are making progress (signals). Finally, you arrive at the metrics you want to measure, focusing on the desired outcome rather than just the metric. For example, the goal could be “Make sure that engineers have more focus time,” a signal could be “Engineers report fewer cases of meeting overload,” and a metric could be “Engineer focus time.” For metrics, you can build a team dashboard that collects them in one place, making them easy to analyze. You can check out this video from Google if you'd like to learn more about this method.
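A GSM entry can be captured as a tiny data structure so the goal, its signals, and its metrics stay linked rather than being tracked as bare numbers. This is only an illustrative sketch; the example entry and the `progressing` helper are hypothetical, not part of Google's framework.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class GSM:
    """One Goals/Signals/Metrics entry: why -> how we'd notice -> what we count."""
    goal: str
    signals: List[str] = field(default_factory=list)
    metrics: List[str] = field(default_factory=list)


# The focus-time example from the text, expressed as a GSM entry.
focus_time = GSM(
    goal="Make sure that engineers have more focus time",
    signals=["Engineers report fewer cases of meeting overload"],
    metrics=["Engineer focus time (uninterrupted hours per week)"],
)


def progressing(baseline: float, current: float, higher_is_better: bool = True) -> bool:
    """Check a measured metric against its baseline, in the direction the goal implies."""
    return current > baseline if higher_is_better else current < baseline
```

Keeping the goal text next to the metric on the team dashboard helps avoid optimizing the number while losing sight of the outcome it was meant to represent.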
Gene Kim
Author, Researcher, Speaker, Director, DevOps Enthusiast,
IT Revolution
David Brown
Founder and CEO,
Toro Cloud
Otavio Santana
Software Engineer and Architect and Open Source Committer,
OS Expert
Tanaka Mutakwa
VP of Engineering,
Names and Faces