Navigating the Skies
As we stand at the precipice of an increasingly digitized world, the challenges of emerging cyber threats are becoming more complex. With 20 years of experience as a cybersecurity professional, I have seen this evolution firsthand. The need for skilled professionals who can navigate these complexities has never been more critical. In this article, I aim to highlight the essential skills required for future cybersecurity experts and how we can effectively cultivate such talent. 1. Comprehensive Understanding of Emerging Technologies Emerging technologies like artificial intelligence (AI), machine learning (ML), blockchain, Internet of Things (IoT), and quantum computing are drastically transforming not just businesses but also the field of cybersecurity. These technologies bring new opportunities and efficiencies and introduce novel vulnerabilities and attack vectors. Therefore, future cybersecurity professionals must understand these technologies inside out. They should be able to anticipate potential security risks associated with these technologies and devise effective countermeasures proactively. 2. Deep Proficiency in Cloud Security The shift toward cloud-based solutions is accelerating, making expertise in cloud security indispensable. Organizations adopting different cloud deployment models — public, private, hybrid, or multi-cloud — face unique security challenges. Professionals must understand these nuances, identify cloud-specific threats, and implement best practices to secure data and applications in the cloud environment. Furthermore, they should be familiar with tools and techniques for continuously monitoring and auditing cloud resources. 3. Mastery of Cybersecurity Frameworks and Standards Cybersecurity frameworks and standards provide structured and systematic approaches to managing cyber risks. Familiarity with these frameworks, such as the ISO 27001, NIST Cybersecurity Framework, CIS Controls, and GDPR, is crucial for implementing robust security measures and ensuring regulatory compliance. These frameworks guide organizations in identifying their most critical assets, assessing vulnerabilities, prioritizing risk mitigation efforts, and continuously monitoring and improving their security posture. 4. Ability to Perform Threat Hunting and Incident Response In an era where attacks are inevitable, the ability to proactively hunt for unknown threats and respond effectively to incidents is critical. This involves understanding attacker tactics, techniques, and procedures (TTPs), using advanced threat detection tools, conducting thorough incident analysis, and coordinating swift and efficient response actions. Professionals should be adept at forensic investigations to identify breach points, contain the impact, eradicate the threat, and recover systems to normal operations. 5. Skills in Secure Coding Practices Software applications have become prime targets for attackers, making secure coding skills highly valuable. Insecure coding can lead to vulnerabilities, such as SQL injection, cross-site scripting (XSS), and buffer overflow, which attackers exploit. Knowledge of secure coding practices across common programming languages is essential for developing resilient software. Additionally, professionals should understand how to conduct code reviews and use automated tools for vulnerability scanning in the development lifecycle. 6. Expertise in Identity and Access Management (IAM) Identity and access management form the cornerstone of cybersecurity. 
It's about ensuring that only authorized individuals have access to specific resources for valid reasons. Future professionals should master IAM concepts like single sign-on (SSO), multi-factor authentication (MFA), privileged access management (PAM), identity federation, and user behavior analytics. They must also stay abreast of emerging trends like biometric authentication and blockchain-based decentralized identities. 7. Soft Skills While technical prowess is significant, soft skills hold equal weight in cybersecurity. Problem-solving and critical thinking skills are vital for analyzing complex threats, devising strategies, and making informed decisions. Communication skills are crucial for explaining technical issues to non-technical stakeholders, influencing security policies, and promoting a culture of security awareness within the organization. Ethical decision-making is also essential, given the sensitive nature of information professionals handle. Nurturing Talent for the Future Cultivating future-ready talent requires concerted efforts across various fronts: Integrating Cybersecurity into Education: Introducing cybersecurity concepts at the school level can spark interest early on and encourage more students to consider this career path. Universities should offer specialized courses that cover theoretical and practical aspects of cybersecurity. Providing Continuous Training: The dynamic nature of cyber threats necessitates continuous learning. Regular training programs, workshops, webinars, and conferences can help professionals stay updated with the latest trends, technologies, and threat intelligence. Promoting Certifications: Industry-recognized certifications like CISSP, CISM, CEH, and CompTIA Security+ validate a professional's knowledge and skills, enhance their credibility, and open up better job opportunities. Organizations should support and incentivize their employees to pursue these certifications. Creating Mentorship Programs: Experienced professionals can play a significant role in guiding newcomers through mentorship programs. They can share valuable insights, lessons learned, and best practices from years of working in the field. The future of cybersecurity presents both challenges and opportunities. By nurturing talent equipped with the right mix of skills, we can ensure a robust defense against evolving cyber threats and contribute to building a safer digital world.
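To make the secure-coding skills discussed above (point 5) concrete, here is a minimal Python sketch contrasting an injection-prone query with a parameterized one. The table, columns, and sample data are illustrative assumptions, not taken from the article.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the input is concatenated into the SQL string, so a value like
    # "alice' OR '1'='1" changes the query's meaning (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: the driver passes the value as a bound parameter, never as SQL text.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    malicious = "alice' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # the injected condition matches every row
    print(find_user_safe(conn, malicious))    # returns nothing: no such literal username
```

The same habit of never mixing untrusted input with code also underlies defenses against XSS and command injection, and it is exactly what code reviews and automated scanners look for.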
Amazon Web Services (AWS) provides extensive cloud computing services. These services equip businesses with the flexibility, scalability, and reliability necessary for their operations. Security becomes a paramount concern as organizations shift their activities to the cloud. The AWS Identity and Access Management (IAM) service protects your AWS resources. This piece explores the best ways to secure your AWS account by employing IAM. Best Practices To Secure AWS Account With IAM Establish Robust Password Protocols Securing your AWS account starts with ensuring that users establish robust passwords. IAM allows you to impose password complexity requirements, including minimum length, incorporation of special characters, and expiration timelines. Moreover, activate multi-factor authentication (MFA) for all IAM users; it provides an additional security layer and diminishes the possibility of unauthorized entry even when passwords fall into the wrong hands. Enforce the Principle of Least Privilege The principle of least privilege is crucial to the safety of your AWS account. Granting only the permissions that users and roles need for their specific tasks minimizes inadvertent data exposure risks and restricts the potential harm from breached credentials. Regular reviews and permission adjustments are needed to keep access aligned with each user's current duties. Consider Using IAM Roles for EC2 Instances Steer clear of long-term access keys for EC2 instances. Opt for IAM roles, as they provide temporary security credentials for these instances. IAM roles permit access to other AWS services without exposing sensitive credentials on the instances themselves. This approach boosts security and simplifies access management. Activate CloudTrail for Logging and Auditing AWS CloudTrail maintains a comprehensive history of API interactions within your AWS account. By activating CloudTrail, you can monitor resource alterations, identify unauthorized actions, and perform security evaluations. Preserving CloudTrail logs in a distinct AWS account or region ensures that the logs remain intact even if the principal account is compromised. Consistent Security Evaluation and Auditing Make it routine to examine your IAM policies, roles, and user access. This helps identify and close potential security gaps. Conduct security evaluations and audits to align with best practices and industry benchmarks. AWS IAM Access Analyzer aids in pinpointing unintentional access to resources while offering improvement suggestions. Ensure Cross-Account Access Security Collaboration with other AWS accounts or third-party entities calls for establishing secure cross-account access. Root account credentials should be avoided in favor of IAM roles to create trust between accounts. This method guarantees precise control over permissions and simplifies revoking access when required. Frequent Training and Education Educating your team about AWS IAM best practices and evolving security threats is a cornerstone of maintaining a secure environment. Organize routine training sessions and workshops and offer educational resources to ensure everyone remains informed about the most recent security practices and developments.
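Before the conclusion, here is a minimal sketch of what the password-policy and least-privilege advice can look like in practice, using Python and boto3 (the AWS SDK). The policy name, bucket ARN, and thresholds are illustrative assumptions, not values from the article.

```python
import json

import boto3

iam = boto3.client("iam")

# Enforce a stronger account password policy (thresholds are illustrative).
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
)

# A least-privilege policy: read-only access to a single, hypothetical bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ReportsReadOnly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```

The same idea extends to roles and groups: grant only the actions and resources a task actually needs, and revisit the policies regularly as duties change.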
Conclusion Maintaining the security of your AWS account using IAM best practices is a critical need in today's digital environment. Compliance with the principle of least privilege, activation of MFA, and routine security assessments enhance your AWS account's security posture. Adherence to these best practices gives you control over access to your AWS resources, diminishes the likelihood of unauthorized entry, and safeguards sensitive information against possible breaches.
Web application penetration testing, also known as web app pen-testing or security testing, is an organized evaluation of a web application's security to identify vulnerabilities and weaknesses that could be exploited by malicious actors. The main goal of penetration testing is to proactively assess the security posture of a web application and identify potential vulnerabilities before attackers can exploit them. During a web app penetration test, skilled security professionals, known as penetration testers or ethical hackers, simulate various attack scenarios to uncover security flaws that might lead to unauthorized access, data breaches, or other malicious activities. The process typically involves the following steps: Information Gathering: Penetration testers gather information about the target web application, such as its structure, technologies used, and possible entry points. Threat Modeling: They analyze the web application's architecture and design to determine potential threat vectors and prioritize areas to test. Vulnerability Scanning: Automated tools may initially scan the web application to quickly identify common vulnerabilities. Manual Testing: Penetration testers manually explore the application, attempting to exploit various vulnerabilities, such as injection flaws (e.g., SQL injection, XSS), authentication issues, authorization problems, insecure direct object references, etc. Authentication and Session Management: The testers assess the strength of user authentication mechanisms and session management controls. Authorization Testing: They check if the application correctly enforces access controls and user privileges. Data Validation: Input fields and data handling are scrutinized to find data manipulation or injection attack opportunities. Error Handling and Information Leakage: Testers look for error messages that could potentially expose sensitive information. Security Misconfigurations: The web server, application server, and database configurations are reviewed for potential weaknesses. Business Logic Flaws: Testers examine the application's logic to identify any flaws that may lead to unauthorized access or abuse of functionality. File and Directory Access: File upload and directory traversal vulnerabilities are assessed to prevent unauthorized access to sensitive files. Session Hijacking and Cross-Site Request Forgery (CSRF): Testers check for weaknesses that may lead to session hijacking or CSRF attacks. Report Generation: After the testing is complete, the penetration testers create a comprehensive report outlining the identified vulnerabilities, their potential impact, and recommended remediation measures. Types of Web App Penetration Testing Black Box Testing: In this approach, the penetration tester has no prior knowledge of the web application's internal structure or codebase. The tester treats the application as a real attacker would, trying to gain access to sensitive information or exploit vulnerabilities without any insider knowledge. White Box Testing: In contrast to black box testing, white box testing allows the penetration tester to have full access to the application's source code, architecture, and other details. This information helps the tester perform a more in-depth analysis of the application's security. Gray Box Testing: Gray box testing lies somewhere between black box and white box testing. The tester has partial knowledge of the application's inner workings, such as access to some parts of the source code or system documentation.
Manual Testing: Manual penetration testing involves human testers using various tools, techniques, and creativity to identify security vulnerabilities that automated tools might miss. Manual testing allows for a more comprehensive assessment and validation of potential issues. Automated Testing: Automated tools are used to scan the web application for known vulnerabilities and weaknesses. While automated testing is faster and can identify common issues, it may not catch all types of vulnerabilities, and human expertise is still necessary for a thorough evaluation. White Box Code Review: This type of testing involves a detailed review of the web application’s source code by security experts. They look for vulnerabilities, coding errors, and other security flaws that might not be apparent in other types of testing. Injection Testing: This type of testing focuses on identifying and preventing injection vulnerabilities, such as SQL injection, command injection, and LDAP injection, which allow attackers to insert malicious code into the application. Cross-Site Scripting (XSS) Testing: XSS testing aims to uncover vulnerabilities that enable attackers to inject malicious scripts into web pages viewed by other users, potentially compromising their accounts or stealing sensitive information. Cross-Site Request Forgery (CSRF) Testing: CSRF testing helps identify vulnerabilities that allow attackers to trick authenticated users into unknowingly executing actions on a web application without their consent. Security Misconfiguration Testing: This type of testing looks for misconfigured settings, default passwords, and other configuration issues that may lead to security breaches. Authentication and Authorization Testing: In this testing, the penetration tester evaluates the strength of the authentication mechanisms and checks if proper authorization checks are in place to prevent unauthorized access to sensitive areas of the application. Session Management Testing: This type of testing focuses on ensuring that session-related vulnerabilities are not present, preventing issues like session hijacking or fixation. File Upload and Download Testing: The tester examines the file upload/download functionality to ensure that it doesn’t allow malicious files to be uploaded or prevent unauthorized access to sensitive files. Business Logic Testing: Business logic testing evaluates the application’s core logic to ensure that it functions correctly and securely, preventing manipulation of the application’s intended workflow. Mobile App/Web Services Testing: In cases where web services or APIs interact with the web application, testing is performed to ensure their security and protection against attacks like API exploitation. Conclusion Web app penetration testing is an essential component of a comprehensive security strategy for any web application. It helps organizations identify and address security weaknesses, thereby reducing the risk of potential data breaches, financial losses, and damage to their reputation. Regularly conducting such tests, especially after significant updates or changes to the application, is crucial to maintaining a secure web environment. It’s important to note that web application penetration testing should be conducted by trained and experienced professionals, adhering to ethical guidelines and with the permission of the application owner to avoid any legal issues.
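To ground the automated-scanning and security-misconfiguration steps described above, here is a small, hedged Python sketch that checks a target for commonly expected security headers using the requests library. The target URL is a placeholder, and as the article stresses, probes like this should only ever be run against applications you are authorized to test.

```python
import requests

# Headers whose absence often shows up as a "security misconfiguration" finding.
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]


def check_security_headers(url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    response = requests.get(url, timeout=10)
    return [header for header in EXPECTED_HEADERS if header not in response.headers]


if __name__ == "__main__":
    # Placeholder target: only scan applications you have permission to test.
    missing = check_security_headers("https://staging.example.com")
    for header in missing:
        print(f"Missing security header: {header}")
```

A check like this covers only one narrow slice of a real assessment; manual testing and the other techniques listed above are still needed for injection flaws, business logic issues, and the rest.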
Fencing is a crucial technique used in distributed systems to protect shared resources and maintain system stability. It involves isolating problematic nodes or preventing them from accessing shared resources, ensuring data integrity and overall system reliability. In this article, we will explore the concept of fencing in detail, discuss its importance in distributed systems design, and examine a real-world example of how Twitter uses fencing to maintain its service availability and performance. Understanding Fencing in Distributed Systems Distributed systems consist of multiple nodes working together to achieve a common goal, such as serving web pages or processing large volumes of data. In such systems, nodes often need to access shared resources, like databases or file storage. However, when a node experiences issues like crashes or malfunctions, it can compromise the entire system, leading to data corruption or loss. Fencing helps mitigate these risks by isolating problematic nodes or protecting shared resources from concurrent access. There are two main types of fencing mechanisms: Node-level fencing: This type of fencing targets specific nodes experiencing issues and isolates them from the rest of the system. This can be achieved through various methods, such as disabling network access, blocking disk access, or even powering off the problematic node. Resource-level fencing: This type of fencing focuses on protecting the shared resources themselves, rather than isolating specific nodes. Resource-level fencing ensures that only one node can access a shared resource at a time, preventing conflicts and data corruption. This can be achieved using techniques such as locks, tokens, or quorum-based mechanisms. Real-World Example: Twitter and Fencing Twitter is a popular social media platform that relies on a distributed system architecture to handle millions of tweets, likes, and retweets every day. To ensure high availability, data consistency, and performance, Twitter employs fencing mechanisms to manage its distributed systems. One example of fencing at Twitter is in their use of Apache ZooKeeper, a distributed coordination service. ZooKeeper provides a robust and highly available service for managing distributed systems by providing features such as distributed locks, leader election, and configuration management. Twitter uses ZooKeeper to implement resource-level fencing, ensuring that only one node can access a shared resource at a time. When a node in Twitter's system needs to access a shared resource, it first acquires a lock from ZooKeeper. If the lock is granted, the node can access the resource, perform operations, and release the lock when finished. If another node tries to access the same resource while the lock is held, it will be denied, ensuring data consistency and preventing conflicts. Additionally, Twitter uses node-level fencing to isolate problematic nodes. For instance, if a node becomes unresponsive or starts generating errors, it can be fenced off from the rest of the system. This prevents the faulty node from causing further issues, allowing the rest of the system to continue operating normally. Conclusion Fencing is an essential aspect of distributed systems design, as it helps maintain system reliability, availability, and data consistency. By implementing appropriate fencing mechanisms, organizations like Twitter can minimize the impact of failures and ensure the smooth operation of their systems. 
As distributed systems continue to grow in complexity and scale, fencing techniques will remain a critical component in ensuring their stability and performance.
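To make the resource-level fencing pattern concrete, here is a minimal sketch using the kazoo Python client for ZooKeeper, mirroring the lock-then-work flow described above. The connection string, lock path, and identifier are illustrative assumptions, not details of Twitter's actual deployment.

```python
from kazoo.client import KazooClient


def update_shared_resource():
    """Placeholder for work that must not run on two nodes at once."""
    print("updating shared resource")


# Connect to a (hypothetical) ZooKeeper ensemble used for coordination.
zk = KazooClient(hosts="zk1.internal:2181,zk2.internal:2181")
zk.start()

# A distributed lock guarding one shared resource; the identifier helps
# operators see which node currently holds the lock.
lock = zk.Lock("/locks/timeline-index", identifier="node-42")

# Only one node at a time gets past this point (resource-level fencing).
with lock:
    update_shared_resource()

zk.stop()
```

Because the lock is backed by ephemeral ZooKeeper nodes, it is released automatically if the holder's session dies, which is what lets the rest of the system keep making progress when a node is fenced off.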
When starting a new web3 project, it’s important to make the right choices about the blockchain and smart contract language. These choices can significantly impact the overall success of your project as well as your success as a developer. In this article, we'll compare three popular smart contract programming languages: Solidity: used in Ethereum and other EVM-based platforms Cadence: used in the Flow blockchain Move: used in the Sui/Aptos blockchain We'll explain the concepts and characteristics of each language in simple terms, making it easier for beginners to understand. Let’s dive in. Solidity: The Foundation of Ethereum Smart Contracts Solidity is a high-level, object-oriented language used to build smart contracts in platforms like Ethereum. Initially, Solidity aimed to be user-friendly, attracting developers by resembling JavaScript and simplifying learning. While it still values user-friendliness, its focus has shifted to enhancing security. Some Solidity features include: Syntax and simplicity: Solidity uses clear, explicit code with a syntax similar to JavaScript, prioritizing ease of understanding for developers. Security focus: Solidity emphasizes secure coding practices and highlights risky constructs like gas usage. Statically typed: The language enforces data type declarations for variables and functions. Inheritance and libraries: Solidity supports features like inheritance, libraries, and complex user-defined types. Cadence: Empowering Digital Assets on Flow Cadence is designed by Flow, a blockchain known for crypto games and NFT projects. It ensures secure, clear, and approachable smart contract development. Some Cadence features include: Type safety: The language enforces strict type-checking to prevent common errors. Resource-oriented programming: Resources are unique types that can only move between accounts. They can’t be copied or discarded. If a function fails to store a Resource obtained from an account in the function scope during development, then semantic checks will show an error. The run-time enforces the same rules in terms of allowed operations. Contract functions that don’t handle Resources properly in scope before exiting will stop. These Resource features make them perfect for representing both fungible and non-fungible tokens. Ownership is tracked according to where they are stored, and the assets can’t be duplicated or lost since the language enforces correctness. Capability-based security: Using Capabilities allows you to let others access your stored items remotely. If one person wants to access another person's stored items, the first person needs permission, called a Capability. There are two types of Capabilities: public and private. If someone wants to allow everyone to access their items, they can share a public Capability. For example, an account can use a public Capability to accept tokens from anyone. On the other hand, someone can give private Capabilities to certain people, allowing them to access only certain features. For example, in a project with unique digital items, the project owner might grant an "administrator Capability" that lets those users create new items. Image from Guide for Solidity developers | Flow Developer Portal Built-in pre- and post-conditions: Functions have predefined conditions for safer execution. Optimized for digital assets: Cadence's focus on resource-oriented programming makes it ideal for managing digital assets like NFTs. 
Freedom from msg.sender: To grasp the importance of these new ideas, let's take a quick look at some history. In 2018, the Dapper Labs team began working on Cadence as a new programming language. They faced challenges with Solidity because of its limitations. The main frustration in building decentralized apps came from the way contracts were accessed using addresses, making it difficult to combine contracts. Composability in Web3 Now, imagine contracts as Lego building blocks. Composability in Web3 means one contract can be used as a foundation for others, adding their features together. For instance, if a contract records game results on a blockchain, another contract can be built to show the best players. Another one could go even further and use past game results to predict future game odds for betting. But here's the catch: Because of how Solidity works, contracts can only talk to one another if the first contract has permission to access the second one, even if users can access both. In Solidity, who can do what is controlled by protected functions in contracts? This means contracts know and check who is trying to access their protected areas. Image from Guide for Solidity developers | Flow Developer Portal Cadence changes how access works. Instead of using the old way where contracts need permission, it uses something called "Capabilities." When you have a Capability, you can use it to get to a protected item such as a function or resource. This means the contract no longer has to define who's allowed access. You can only get to the protected item if you have a Capability, which you can use with borrow(). So, the old msg.sender way isn't needed anymore! Image from Guide for Solidity developers | Flow Developer Portal The effects of composability are important. When contracts don't need to know beforehand who they're interacting with, users can easily interact with multiple contracts and their functions during a transaction if they have the right permissions (Capabilities). This also allows contracts to interact with one another directly without needing special permissions or preparations. The only condition is that the calling contract must have the required Capabilities. Move: Safeguarding Digital Assets on Sui/Aptos Move, used in the Sui/Aptos blockchain, addresses challenges posed by established languages like Solidity. It ensures scarcity and access control for digital assets. Move's features include: Preventing double-spending: Move prevents the creation or use of assets more than once, ensuring robust blockchain applications. Ownership and rights control: Developers have precise control over ownership and associated rights. Module structure: In Move, a smart contract is called a module, emphasizing modularity and organization. Bytecode verifier: Move employs static analysis to reject invalid bytecode, enhancing security. Standard library: Move includes a standard library for common transaction scripts. Creating Smart Contracts Let's illustrate the differences by comparing a simple smart contract that increments a value in Cadence, Solidity, and Move. Solidity Example In Solidity, creating a contract that increments a value involves defining a contract, specifying the count variable, and creating functions to manipulate it. It uses explicit syntax for variable visibility and function declarations. 
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract Counter {
    uint public count;

    // function to get the current count
    function get() public view returns (uint) {
        return count;
    }

    // function to increment count by 1
    function inc() public {
        count += 1;
    }

    // function to decrement count by 1
    function dec() public {
        count -= 1;
    }
}
```

Cadence Example Cadence's approach to incrementing a value is similar but emphasizes clarity. It utilizes a resource-oriented structure and straightforward syntax, making it easier for developers to create and manage digital assets.

```cadence
pub contract Counter {
    pub var count: Int

    // function to increment count by 1
    pub fun increment() {
        self.count = self.count + 1
    }

    // function to decrement count by 1
    pub fun decrement() {
        self.count = self.count - 1
    }

    pub fun get(): Int {
        return self.count
    }

    init() {
        self.count = 0
    }
}
```

Solidity Versus Cadence Syntax Differences In Solidity, the visibility keyword comes before or after variable/function names, whereas Cadence consistently follows the visibility-type-name sequence. Scalability and Upgradability Flow's network boasts higher transaction throughput than Ethereum, making Cadence more scalable. Additionally, Flow's support for contract updates enhances development. Move Example Move introduces new concepts like modules, resources, and ownership control. A Move module creates an Incrementer resource, requiring the owner's signature for operations.

```move
module incrementer_addr::increment {
    use aptos_framework::account;
    use std::signer;

    struct Incrementer has key {
        count: u64,
    }

    public entry fun increment(account: &signer) acquires Incrementer {
        let signer_address = signer::address_of(account);
        let c_ref = &mut borrow_global_mut<Incrementer>(signer_address).count;
        *c_ref = *c_ref + 1
    }

    public entry fun create_incrementer(account: &signer) {
        let incrementer = Incrementer { count: 0 };
        move_to(account, incrementer);
    }
}
```

Composite Types and Turing Completeness All three languages support composite types, allowing complex types to be built from simpler ones. All are Turing complete, meaning they can handle any computation given enough resources. Resource-Oriented Versus Object-Oriented While Solidity and Move require compilation, Cadence is interpreted. Cadence and Move employ a resource-oriented approach, securing digital ownership in one location. Conclusion: Choosing the Right Programming Language When selecting a programming language like Solidity, Cadence, or Move, consider the needs of your project. Solidity: Solidity is commonly used, but it might not be very easy to work with, and there could be security problems. Cadence: Mainly used for digital assets, Cadence is a newer language that focuses on security, is easy to understand, and provides developers with a superior experience. Move: Move is based on Rust, a more complex language. Rust can be more difficult to learn and understand. Move is a new language, and it doesn't have many tools, resources, or a big community. Languages that are interpreted, like Cadence, are usually slower than languages that are compiled before running. At the time of this writing, Move is used by only a few chains (such as Sui and Aptos), so it doesn't yet offer broad cross-chain compatibility. Ultimately, your choice will impact your project's success, so make an informed decision and enjoy your journey as a web3 developer!
Microservices architecture has become extremely popular in recent years because it allows complex applications to be built as a collection of discrete, independent services, improving scalability, flexibility, and resilience. The distributed nature of microservices, however, presents special difficulties for testing and quality control, so comprehensive testing is essential to guarantee the reliability and scalability of the software. In this guide, we'll delve into the world of microservices testing and examine its significance, methodologies, and best practices to guarantee the smooth operation of these interconnected parts. Understanding Microservices The functionality of an application is provided by a collection of independent, loosely coupled microservices. Each microservice runs independently, has its own database, and implements its own business logic. This architecture supports continuous delivery, scalability, and flexibility. To build a strong foundation, we must first understand the fundamentals of microservices architecture. Microservices are small, independent services that work together to form a complete application. Each service carries out a particular business function and communicates with other services through well-defined APIs. Organizations can more effectively develop, deploy, and scale applications using this modular approach. However, as the number of services grows, thorough testing is essential to find and fix potential problems. Challenges in Microservices Testing Testing microservices introduces several unique challenges, including: Distributed nature: Microservices are distributed across different servers, networks, and even geographical locations. This requires testing to account for network latency, service discovery, and inter-service communication. Dependency management: Microservices often rely on external dependencies such as databases, third-party APIs, and message queues. Testing must consider these dependencies and ensure their availability during testing. Data consistency: Maintaining data consistency across multiple microservices is a critical challenge. Changes made in one service should not negatively impact the functionality of other services. Deployment complexity: Microservices are typically deployed independently, and coordinating testing across multiple services can be challenging. Versioning, rollbacks, and compatibility testing become vital considerations. Integration testing: Microservices architecture demands extensive integration testing to ensure seamless communication and proper behavior among services. Importance of Microservices Testing Microservices testing plays a vital role in guaranteeing the overall quality, reliability, and performance of the system. The following points highlight its significance: Isolation and Independence: Testing each microservice individually ensures that any issues or bugs within a specific service can be isolated, minimizing the impact on other services. Continuous Integration and Delivery (CI/CD): Microservices heavily rely on CI/CD pipelines to enable frequent deployments. Effective testing enables faster feedback loops, ensuring that changes and updates can be delivered reliably without causing disruptions.
Fault Isolation and Resilience: By testing the interactions between microservices, organizations can identify potential points of failure and design resilient strategies to handle failures gracefully. Scalability and Performance: Testing enables organizations to simulate high loads and stress scenarios to identify bottlenecks, optimize performance, and ensure that microservices can scale seamlessly. Types of Microservices Testing Microservices testing involves various types of testing to ensure the quality, functionality, and performance of individual microservices and the system as a whole. Here are some important types of testing commonly performed in microservices architecture: Unit Testing Unit testing focuses on testing individual microservices in isolation. It verifies the functionality of each microservice at a granular level, typically at the code level. Unit tests ensure that individual components or modules of microservices behave as expected and meet the defined requirements. Mocking frameworks are often used to isolate dependencies and simulate interactions for effective unit testing. Integration Testing Integration testing verifies the interaction and integration between multiple microservices. It ensures that microservices can communicate correctly and exchange data according to the defined contracts or APIs. Integration tests validate the interoperability and compatibility of microservices, identifying any issues related to data consistency, message passing, or service coordination. Contract Testing Contract testing validates the contracts or APIs exposed by microservices. It focuses on ensuring that the contracts between services are compatible and adhere to the agreed-upon specifications. Contract testing verifies the request and response formats, data structures, and behavior of the services involved. This type of testing is essential for maintaining the integrity and compatibility of microservices during development and evolution. End-to-End Testing End-to-end (E2E) testing evaluates the functionality and behavior of the entire system, including multiple interconnected microservices, databases, and external dependencies. It tests the complete flow of a user request through various microservices and validates the expected outcomes. E2E tests help identify issues related to data consistency, communication, error handling, and overall system behavior. Performance Testing Performance testing assesses the performance and scalability of microservices. It involves testing the system under different loads, stress conditions, or peak usage scenarios. Performance tests measure response times, throughput, resource utilization, and other performance metrics to identify bottlenecks, optimize performance, and ensure that the microservices can handle expected loads without degradation. Security Testing Security testing is crucial in microservices architecture due to the distributed nature and potential exposure of sensitive data. It involves assessing the security of microservices against various vulnerabilities, attacks, and unauthorized access. Security testing encompasses techniques such as penetration testing, vulnerability scanning, authentication, authorization, and data protection measures. Chaos Engineering Chaos engineering is a proactive testing approach where deliberate failures or disturbances are injected into the system to evaluate its resilience and fault tolerance. 
By simulating failures or stress scenarios, chaos engineering validates the system’s ability to handle failures, recover gracefully, and maintain overall stability. It helps identify weaknesses and ensures that microservices can handle unexpected conditions without causing a system-wide outage. Data Testing Data testing focuses on validating the accuracy, integrity, and consistency of data stored and processed by microservices. It involves verifying data transformations, data flows, data quality, and data integration between microservices and external systems. Data testing ensures that data is correctly processed, stored, and retrieved, minimizing the risk of data corruption or inconsistency. These are some of the key types of testing performed in microservices architecture. The selection and combination of testing types depend on the specific requirements, complexity, and characteristics of the microservices system being tested. A comprehensive testing strategy covering these types of testing helps ensure the reliability, functionality, and performance of microservices-based applications. Best Practices for Microservices Testing Microservices testing presents unique challenges due to the distributed nature of the architecture. To ensure comprehensive testing and maintain the quality and reliability of microservices, it’s essential to follow best practices. Here are some key best practices for microservices testing: Test at Different Levels Microservices testing should be performed at multiple levels, including unit testing, integration testing, contract testing, end-to-end testing, performance testing, and security testing. Each level of testing verifies specific aspects of the microservices and their interactions. Comprehensive testing at various levels helps uncover issues early and ensures the overall functionality and integrity of the system. Prioritize Test Isolation Microservices are designed to be independent and loosely coupled. It’s crucial to test each microservice in isolation to identify and resolve issues specific to that service without impacting other services. Isolating tests ensures that failures or changes in one microservice do not cascade to other parts of the system, enhancing fault tolerance and maintainability. Use Mocking and Service Virtualization Microservices often depend on external services or APIs. Mocking and service virtualization techniques allow for testing microservices independently of their dependencies. By replacing dependencies with mocks or virtualized versions of the services, you can control the behavior and responses during testing, making it easier to simulate different scenarios, ensure test repeatability, and avoid testing delays caused by external service availability. Implement Contract Testing Microservices rely on well-defined contracts or APIs for communication. Contract testing verifies the compatibility and compliance of these contracts between services. By testing contracts, you ensure that services can communicate effectively, preventing integration issues and reducing the risk of breaking changes. Contract testing tools like Pact or Spring Cloud Contract can assist in defining and validating contracts. Automate Testing Automation is crucial for effective microservices testing. Implementing a robust test automation framework and CI/CD pipeline allows for frequent and efficient testing throughout the development lifecycle. 
Automated testing enables faster feedback, reduces human error, and facilitates the continuous delivery of microservices. Tools like Cucumber, Postman, or JUnit can be leveraged for automated testing at different levels. Emphasize Performance Testing Scalability and performance are vital aspects of microservices architecture. Conduct performance testing to ensure that microservices can handle expected loads and perform optimally under various conditions. Load testing, stress testing, and performance profiling tools like Gatling, Apache JMeter, or Locust can help assess the system’s behavior, identify bottlenecks, and optimize performance. Implement Chaos Engineering Chaos engineering is a proactive testing methodology that involves intentionally injecting failures or disturbances into a microservices environment to evaluate its resilience. By simulating failures and stress scenarios, you can identify weaknesses, validate fault tolerance mechanisms, and improve the overall robustness and reliability of the system. Tools like Chaos Monkey, Gremlin, or Pumba can be employed for chaos engineering experiments. Include Security Testing Microservices often interact with sensitive data and external systems, making security testing crucial. Perform security testing to identify vulnerabilities, ensure data protection, and prevent unauthorized access. Techniques such as penetration testing, vulnerability scanning, and adherence to security best practices should be incorporated into the testing process to mitigate security risks effectively. Monitor and Analyze System Behavior Monitoring and observability are essential during microservices testing. Implement monitoring tools and techniques to gain insights into the behavior, performance, and health of microservices. Collect and analyze metrics, logs, and distributed traces to identify issues, debug problems, and optimize the system’s performance. Tools like Prometheus, Grafana, ELK stack, or distributed tracing systems aid in monitoring and analyzing microservices. Test Data Management Managing test data in microservices testing can be complex. Ensure proper test data management by using techniques like data virtualization or synthetic data generation. These approaches allow for realistic and consistent test scenarios, minimizing dependencies on production data and external systems. By following these best practices, organizations can establish a robust testing process for microservices, ensuring quality, reliability, and performance in distributed systems. Adapting these practices to specific project requirements, technologies, and organizational needs is important to achieve optimal results. Test Environment and Infrastructure Creating an effective test environment and infrastructure is crucial for successful microservices testing. A well-designed test environment ensures that the testing process is reliable and efficient and replicates the production environment as closely as possible. Here are some key considerations for setting up a robust microservices test environment and infrastructure: Containerization and Orchestration Containerization platforms like Docker and orchestration tools such as Kubernetes provide a flexible and scalable infrastructure for deploying and managing microservices. By containerizing microservices, you can encapsulate each service and its dependencies, ensuring consistent environments across testing and production. 
Container orchestration tools enable efficient deployment, scaling, and management of microservices, making it easier to replicate the production environment for testing purposes. Environment Configuration Management Maintaining consistent configurations across different testing environments is crucial. Configuration management tools like Ansible, Chef, or Puppet help automate the setup and configuration of test environments. They allow you to define and manage environment-specific configurations, such as database connections, service endpoints, and third-party integrations, ensuring consistency and reproducibility in testing. Test Data Management Microservices often interact with databases and external systems, making test data management complex. Proper test data management ensures that test scenarios are realistic and cover different data scenarios. Techniques such as data virtualization, where virtual test data is generated on the fly, or synthetic data generation, where realistic but non-sensitive data is created, can be employed. Additionally, tools like Flyway or Liquibase help manage database schema migrations during testing. Service Virtualization Service virtualization allows you to simulate or virtualize the behavior of dependent microservices that are not fully developed or available during testing. It helps decouple testing from external service dependencies, enabling continuous testing even when certain services are unavailable or undergoing changes. Tools like WireMock, Mountebank, or Hoverfly provide capabilities for creating virtualized versions of dependent services, allowing you to define custom responses and simulate various scenarios. Continuous Integration and Delivery (CI/CD) Pipeline A robust CI/CD pipeline is essential for continuous testing and seamless delivery of microservices. The CI/CD pipeline automates the build, testing, and deployment processes, ensuring that changes to microservices are thoroughly tested before being promoted to higher environments. Tools like Jenkins, GitLab CI/CD, or CircleCI enable the automation of test execution, test result reporting, and integration with version control systems and artifact repositories. Test Environment Provisioning Automated provisioning of test environments helps in reducing manual effort and ensures consistency across environments. Infrastructure-as-Code (IaC) tools like Terraform or AWS CloudFormation enable the provisioning and management of infrastructure resources, including virtual machines, containers, networking, and storage, in a programmatic and reproducible manner. This allows for quick and reliable setup of test environments with the desired configurations. Monitoring and Log Aggregation Monitoring and log aggregation are essential for gaining insights into the behavior and health of microservices during testing. Tools like Prometheus, Grafana, or ELK (Elasticsearch, Logstash, Kibana) stack can be used for collecting and analyzing metrics, logs, and traces. Monitoring helps identify performance bottlenecks, errors, and abnormal behavior, allowing you to optimize and debug microservices effectively. Test Environment Isolation Isolating test environments from production environments is crucial to prevent any unintended impact on the live system. Test environments should have separate infrastructure, networking, and data resources to ensure the integrity of production data. 
Techniques like containerization, virtualization, or cloud-based environments provide effective isolation and sandboxing of test environments. Scalability and Performance Testing Infrastructure Microservices architecture emphasizes scalability and performance. To validate these aspects, it is essential to have a dedicated infrastructure for load testing and performance testing. This infrastructure should include tools like Gatling, Apache JMeter, or Locust, which allow simulating high loads, measuring response times, and analyzing system behavior under stress conditions. By focusing on these considerations, organizations can establish a robust microservices test environment and infrastructure that closely mirrors the production environment. This ensures accurate testing, faster feedback cycles, and reliable software delivery while minimizing risks and ensuring the overall quality and reliability of microservices-based applications. Test Automation Tools and Frameworks Microservices testing can be significantly enhanced by utilizing various test automation tools and frameworks. These tools help streamline the testing process, improve efficiency, and ensure comprehensive test coverage. In this section, we will explore some popular microservices test automation tools and frameworks: Cucumber Cucumber is a widely used tool for behavior-driven development (BDD) testing. It enables collaboration between stakeholders, developers, and testers by using a plain-text format for test scenarios. With Cucumber, test scenarios are written in a Given-When-Then format, making it easier to understand and maintain test cases. It supports multiple programming languages and integrates well with other testing frameworks and tools. Postman Postman is a powerful API testing tool that allows developers and testers to create and automate tests for microservices APIs. It provides a user-friendly interface for sending HTTP requests, validating responses, and performing functional testing. Postman supports scripting and offers features like test assertions, test data management, and integration with CI/CD pipelines. Rest-Assured Rest-Assured is a Java-based testing framework specifically designed for testing RESTful APIs. It provides a rich set of methods and utilities to simplify API testing, including support for request and response specification, authentication, data validation, and response parsing. Rest-Assured integrates well with popular Java testing frameworks like JUnit and TestNG. WireMock WireMock is a flexible and easy-to-use tool for creating HTTP-based mock services. It allows you to simulate the behavior of external dependencies or unavailable services during testing. WireMock enables developers and testers to stub out dependencies, define custom responses, and verify requests made to the mock server. It supports features like request matching, response templating, and record/playback of requests. Pact Pact is a contract testing framework that focuses on ensuring compatibility and contract compliance between microservices. It enables teams to define and verify contracts, which are a set of expectations for the interactions between services. Pact supports various programming languages and allows for generating consumer-driven contracts that can be used for testing both the provider and consumer sides of microservices. Karate Karate is an open-source API testing framework that combines API testing, test data preparation, and assertions in a single tool. 
It uses a simple and expressive syntax for writing tests and supports features like request chaining, dynamic payloads, and parallel test execution. Karate also provides capabilities for testing microservices built on other protocols like SOAP and GraphQL. Gatling Gatling is a popular open-source tool for load and performance testing. It allows you to simulate high user loads, measure response times, and analyze system behavior under stress conditions. Gatling provides a domain-specific language (DSL) for creating test scenarios and supports distributed load generation for scalability. It integrates well with CI/CD pipelines and offers detailed performance reports. Selenium Selenium is a widely used web application testing framework that can also be leveraged for testing microservices with web interfaces. It provides a range of tools and APIs for automating browser interactions and performing UI-based tests. Selenium supports various programming languages and offers capabilities for cross-browser testing, test parallelization, and integration with test frameworks like TestNG and JUnit. These are just a few examples of the many tools and frameworks available for microservices test automation. The choice of tool depends on factors such as project requirements, programming languages, team expertise, and integration capabilities with the existing toolchain. It’s essential to evaluate the features, community support, and documentation of each tool to select the most suitable one for your specific testing needs. Monitoring and Observability Monitoring and observability are essential for gaining insights into the health, performance, and behavior of microservices. Key monitoring aspects include: Log Aggregation and Analysis: Collecting and analyzing log data from microservices helps in identifying errors, diagnosing issues, and understanding the system’s behavior. Metrics and Tracing: Collecting and analyzing performance metrics and distributed traces provides visibility into the end-to-end flow of requests and highlights bottlenecks or performance degradation. Alerting and Incident Management: Establishing effective alerting mechanisms enables organizations to proactively respond to issues and incidents. Integrated incident management workflows ensure timely resolution and minimize disruptions. Distributed Tracing: Distributed tracing techniques allow for tracking and visualizing requests as they traverse multiple microservices, providing insights into latency, dependencies, and potential bottlenecks. Conclusion The performance, scalability, and reliability of complex distributed systems depend on the reliability of microservices. Organizations can lessen the difficulties brought about by microservices architecture by adopting a thorough testing strategy that includes unit testing, integration testing, contract testing, performance testing, security testing, chaos testing, and end-to-end testing. The overall quality and resilience of microservices-based applications are improved by incorporating best practices like test automation, containerization, CI/CD, service virtualization, scalability testing, and efficient monitoring, which results in better user experiences and successful deployments. The performance, dependability, and quality of distributed software systems are all dependent on the results of microservices testing. Organizations can find and fix problems at different levels, from specific microservices to end-to-end scenarios, by implementing a thorough testing strategy. 
Teams can successfully validate microservices throughout their lifecycle with the right test environment, infrastructure, and monitoring tools, facilitating quicker and more dependable software delivery. In today’s fast-paced technological environment, adopting best practices and using the appropriate testing tools and frameworks will enable organizations to create robust, scalable, and resilient microservices architectures, ultimately improving customer satisfaction and business success.
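As a small, concrete companion to the unit-testing, test-isolation, and mocking practices described above, here is a sketch in Python using pytest and unittest.mock. The order service, its URL, and the response shape are hypothetical; the point is testing one microservice's logic without a running downstream dependency.

```python
import unittest.mock as mock

import requests


def get_order_total(order_id: str, base_url: str = "http://orders.internal") -> float:
    """Fetch an order from a (hypothetical) order service and return its total."""
    response = requests.get(f"{base_url}/orders/{order_id}", timeout=5)
    response.raise_for_status()
    items = response.json()["items"]
    return sum(item["price"] * item["quantity"] for item in items)


def test_get_order_total_is_isolated_from_the_order_service():
    # Replace the real HTTP call with a mock so the test needs no running service.
    fake_response = mock.Mock()
    fake_response.json.return_value = {
        "items": [
            {"price": 10.0, "quantity": 2},
            {"price": 5.0, "quantity": 1},
        ]
    }
    fake_response.raise_for_status.return_value = None

    with mock.patch("requests.get", return_value=fake_response) as fake_get:
        assert get_order_total("order-123") == 25.0
        fake_get.assert_called_once_with(
            "http://orders.internal/orders/order-123", timeout=5
        )
```

Run it with pytest; the same pattern scales up to contract tests, where the mocked response is generated from an agreed contract rather than written by hand.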
Let's start with a story: Have you heard the news about CircleCI's breach? No, not the one where they accidentally leaked some customer credentials a few years back. This time, it's a bit more serious. It seems that some unauthorized individuals were able to gain access to CircleCI's systems, compromising the secrets stored in CircleCI. CircleCI advised users to rotate "any and all secrets" stored in CircleCI, including those stored in project environment variables or contexts. The CircleCI breach serves as a stark reminder of the risks associated with storing sensitive information in CI/CD systems. Next, let's talk about CI/CD security a bit more. CI/CD Security CI/CD systems, like CircleCI, are platforms used by developers to automate build/deploy processes, which, by definition, means that they need to access other systems to deploy software or use some services, like cloud services. For example, after building some artifacts, you probably need to push those artifacts to some repositories; for another, when deploying your cloud infrastructure using code, you need to access public cloud providers to create stuff. As we can imagine, this means that a lot of sensitive information gets passed through the CI/CD platforms daily, because for CI/CD to interact with other systems, some type of authentication and authorization is required, and in most cases, passwords are used for this. So, needless to say, the security of the CI/CD systems themselves is critical. Unfortunately, although CI/CD systems are designed to automate software development processes, they might not necessarily be built with security in mind and they are not 100% secure (well, nothing is). Best Practices to Secure CI/CD Systems Best Practice #1: No Long-Lived Credentials One of the best practices, of course, is not to use long-lived credentials at all. For example, when you access AWS, always use temporary security credentials (IAM roles) instead of long-term access keys. Now, when you try to create an access key, AWS even reminds you to not do this, but recommends SSO/other methods. In fact, in many scenarios, you don't need long-term access keys that never expire; instead, you can create IAM roles and generate temporary security credentials. Temporary security credentials consist of an access key ID and a secret access key, but they also include a security token that indicates when the credentials expire. Best Practice #2: Don't Store Secrets in CI/CD Systems By storing secrets in CI systems, we are essentially placing our trust in a third-party service to keep sensitive information safe. However, if that service is ever compromised, as was the case with CircleCI, then all of the secrets stored within it are suddenly at risk, which can result in serious consequences. What we can do is to use some secrets manager to store secrets, and use a secure way in our CI/CD systems to retrieve those secrets. If you are not familiar with data security or secrets managers, maybe give this blog a quick read. Best Practice #3: Rotate/Refresh Your Passwords Not all systems you are trying to access from your CI/CD systems support some kind of short-lived credentials like AWS does. There are certain cases where you would have to use long-lived passwords, and in those cases, you need to make sure you rotate and refresh the token as it periodically expires. Certain secret managers even can rotate secrets for you, reducing operational overhead. 
For example, HashiCorp's Vault supports multiple "engines" (components that store, generate, or encrypt data), and most of the database engines support root password rotation, where Vault manages the rotation automatically for you: If you are interested in more best practices, there is a blog on how to secure your CI/CD pipeline. How OIDC (OpenID Connect) Works Following these best practices, let's dive into two hands-on tutorials to harden your CI/CD security. Before that, let's do a very short introduction to the technology that enables us to do so: OpenID Connect (OIDC). If you don't want to read the official definition of OIDC on the official website, here's the TL;DR version: OIDC allows us to use short-lived tokens instead of long-lived passwords, following best practice #1 mentioned earlier. If integrated with CI, we can configure our CI to request short-lived access tokens and use them to access other systems (of course, the other systems need to support OIDC on their end). Tutorial: GitHub Actions OIDC With AWS To use OIDC in GitHub Actions workflows, first, we need to configure AWS. 1. Create an OIDC Provider in AWS For Configure provider, choose OpenID Connect. For the provider URL: Use https://token.actions.githubusercontent.com Choose "Get thumbprint" to verify the server certificate of your IdP. For the "Audience": Use sts.amazonaws.com. After creation, copy the provider ARN, which will be used next. To learn more about this step, see the official document here. 2. Create a Role With Assume Role Policy Next, let's configure the role and trust in IAM. Here, I created a role named "gha-oidc-role" and attached the AWS-managed policy "AmazonS3ReadOnlyAccess" (ARN: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess). Then, the tricky part is the trust relationships, and here's an example of the value I used: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::737236345234:oidc-provider/token.actions.githubusercontent.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" }, "StringLike": { "token.actions.githubusercontent.com:sub": "repo:IronCore864/vault-oidc-test:*" } } } ] } The Principal is the OIDC provider's ARN we copied from the previous step. The token.actions.githubusercontent.com:sub condition defines which org/repo can assume this role; here I used IronCore864/vault-oidc-test. After creation, copy the IAM role ARN, which will be used next. To learn more about creating roles for OIDC, see the official document here. 3. Test AWS Access in GitHub Actions Using OIDC Let's create a simple test workflow: name: AWS on: workflow_dispatch: jobs: s3: runs-on: ubuntu-latest permissions: id-token: write contents: read steps: - name: configure aws credentials uses: aws-actions/configure-aws-credentials@v2 with: role-to-assume: arn:aws:iam::737236345234:role/gha-oidc-role role-session-name: samplerolesession aws-region: us-west-1 - name: ls run: | aws s3 ls This workflow, named "AWS", is triggered manually, tries to assume the role we created in the previous step, and runs a simple AWS command to test that we have access. The job or workflow run requires a permission setting with id-token: write. You won't be able to request the OIDC JWT ID token if the permissions setting for id-token is set to read or none. For your convenience, I put the workflow YAML file here.
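Under the hood, the aws-actions/configure-aws-credentials action exchanges the workflow's OIDC ID token for temporary credentials via an STS AssumeRoleWithWebIdentity call. If you are curious what that exchange looks like outside of GitHub Actions, here is a minimal sketch using the AWS SDK for Java v2; how the ID token is obtained and the class name are assumptions for illustration only, not part of the tutorial:

Java
import software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleWithWebIdentityRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

public class OidcToTemporaryCredentials {

    public static void main(String[] args) {
        // Assumption: the OIDC ID token (a JWT) is passed in as the first argument.
        // In GitHub Actions, the runner requests this token from GitHub's OIDC provider.
        String idToken = args[0];

        // AssumeRoleWithWebIdentity is an unsigned call, so no AWS credentials are needed here.
        try (StsClient sts = StsClient.builder()
                .region(Region.US_WEST_1)
                .credentialsProvider(AnonymousCredentialsProvider.create())
                .build()) {

            AssumeRoleWithWebIdentityRequest request = AssumeRoleWithWebIdentityRequest.builder()
                    .roleArn("arn:aws:iam::737236345234:role/gha-oidc-role")
                    .roleSessionName("samplerolesession")
                    .webIdentityToken(idToken)
                    .durationSeconds(900) // short-lived on purpose
                    .build();

            Credentials creds = sts.assumeRoleWithWebIdentity(request).credentials();

            // The returned credentials are temporary: note the expiration timestamp.
            System.out.println("Access key ID: " + creds.accessKeyId());
            System.out.println("Expires at:    " + creds.expiration());
        }
    }
}

The returned credentials expire after the requested duration, which is exactly the property that makes this approach safer than long-lived access keys.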
After triggering the workflow, everything works with no access keys or secrets needed whatsoever: Tutorial: GitHub Actions OIDC With HashiCorp Vault Unfortunately, not all systems that you are trying to access from your CI/CD workflows support OIDC, and sometimes you would still need to use passwords. However, using hardcoded passwords means we need to duplicate and store them in GitHub as secrets, and this violates our aforementioned best practice. A better approach is to store secrets in a secrets manager and set up OIDC between your CI and the secrets manager, so that secrets can be retrieved with no password used in the process. 1. Install HashiCorp Vault In this tutorial, we will run a local dev server (DO NOT DO THIS IN PRODUCTION) and expose it to the public internet so that GitHub Actions can reach it. The quickest way to install Vault on a Mac is probably Homebrew. First, install the HashiCorp tap, a repository of all of HashiCorp's Homebrew packages: brew tap hashicorp/tap. Then, install Vault: brew install hashicorp/tap/vault. For other systems, refer to the official doc here. After installation, we can quickly start a local dev server by running: vault server -dev However, this is only running locally on our laptop, not accessible from the public internet. To expose it to the internet so that GitHub Actions can reach it, we use ngrok, a fast way to put your app on the internet. For detailed installation and usage, see the official doc. After installation, we can simply run ngrok http 8200 to expose the Vault port. Take note of the public URL to your local Vault. 2. Enable JWT Auth Execute the following to enable JWT auth in Vault: vault auth enable jwt Apply the configuration for GitHub Actions: vault write auth/jwt/config \ bound_issuer="https://token.actions.githubusercontent.com" \ oidc_discovery_url="https://token.actions.githubusercontent.com" Create a policy that grants access to the specified paths: vault policy write myproject-production - <<EOF path "secret/*" { capabilities = [ "read" ] } EOF Create a role to use the policy: vault write auth/jwt/role/myproject-production - <<EOF { "role_type": "jwt", "user_claim": "repository", "bound_claims_type": "glob", "bound_claims": {"sub": "repo:IronCore864/*"}, "policies": ["myproject-production"] } EOF When creating the role, ensure that the bound_claims parameter is defined according to your security requirements and has at least one condition. To check arbitrary claims in the received JWT payload, the bound_claims parameter contains a set of claims and their required values. In the above example, the role will accept incoming authentication requests from any repo owned by the user (or org) IronCore864. To see all the available claims supported by GitHub's OIDC provider, see "About security hardening with OpenID Connect". 3. Create a Secret in Vault Next, let's create a secret in Vault for testing purposes, and we will try to use GitHub Actions to retrieve this secret using OIDC. Here we created a secret named "aws" under "secret", and there is a key named "accessKey" in the secret with some random testing value. To verify, we can run: $ vault kv get secret/aws == Secret Path == secret/data/aws ======= Metadata ======= Key Value --- ----- created_time 2023-07-29T00:00:38.757487Z custom_metadata <nil> deletion_time n/a destroyed false version 1 ====== Data ====== Key Value --- ----- accessKey test Note that the "Secret Path" is actually secret/data/aws, rather than secret/aws.
This is because, with the kv engine v2, the API path has an added "data" segment. 4. Retrieve Secret From Vault in GitHub Actions Using OIDC Let's create another simple test workflow: name: Vault on: workflow_dispatch: jobs: retrieve-secret: runs-on: ubuntu-latest permissions: id-token: write contents: read steps: - name: Import secret from Vault id: import-secrets uses: hashicorp/vault-action@v2 with: method: jwt url: https://f2f6-185-212-61-32.ngrok-free.app role: myproject-production secrets: | secret/data/aws accessKey | AWS_ACCESS_KEY_ID; - name: Use secret from Vault run: | echo "${{ env.AWS_ACCESS_KEY_ID }}" echo "${{ steps.import-secrets.outputs.AWS_ACCESS_KEY_ID }}" This workflow, named "Vault", is triggered manually, tries to assume the role we created in the previous steps, and retrieves the secret we just created. To use the secret, we can either use "env" or step outputs, as shown in the example above. As with the previous AWS job, it requires a permission setting with id-token: write. For your convenience, I put the workflow YAML file here. After triggering the workflow, everything works with no secrets used to access our Vault: Summary In this article, we started with the infamous CircleCI breach, went on to talk about security in CI/CD systems with some best practices, did a quick introduction to OIDC, and walked through two hands-on tutorials on how to use it with your CI. After this tutorial, you should be able to configure secure access between GitHub Actions and your cloud providers and retrieve secrets securely using OIDC. See you in the next one!
When choosing a user authentication method for your application, you usually have several options: develop your own system for identification, authentication, and authorization, or use a ready-made solution. A ready-made solution means that the user already has an account on an external system such as Google, Facebook, or GitHub, and you use the appropriate mechanism, most likely OAuth, to provide limited access to the user's protected resources without transferring the username and password to it. The second option, OAuth, is easier to implement, but it carries a risk for your users: if their external account is blocked, they lose access to your site. Also, if I, as a user, want to sign in to a site that I do not trust, I have to provide my personal information, such as my email and full name, sacrificing my anonymity. In this article, we'll build an alternative login method for Spring using the MetaMask browser extension. MetaMask is a cryptocurrency wallet used to manage Ethereum assets and interact with the Ethereum blockchain. Unlike with an OAuth provider, only the necessary set of data is stored on the Ethereum network. We must take care not to store secret information in the public data, but since any wallet on the Ethereum network is in fact a cryptographically strong key pair, in which the public key determines the wallet address and the private key is never transmitted over the network and is known only to the owner, we can use asymmetric cryptography to authenticate users. Authentication Flow Connect to MetaMask and receive the user's address. Obtain a one-time code (nonce) for the user's address. Sign a message containing the nonce with the private key using MetaMask. Authenticate the user by validating the user's signature on the back end. Generate a new nonce so that a captured signature cannot be replayed. Step 1: Project Setup To quickly build a project, we can use Spring Initializr. Let's add the following dependencies: Spring Web Spring Security Thymeleaf Lombok Download the generated project and open it in your preferred IDE. In the pom.xml, we add the following dependency to verify the Ethereum signature: XML <dependency> <groupId>org.web3j</groupId> <artifactId>core</artifactId> <version>4.10.2</version> </dependency> Step 2: User Model Let's create a simple User model containing the following fields: address and nonce. The nonce, or one-time code, is a random number we will use for authentication to ensure the uniqueness of each signed message.
Java public class User { private final String address; private Integer nonce; public User(String address) { this.address = address; this.nonce = (int) (Math.random() * 1000000); } // getters } To store users, for simplicity, I’ll be using an in-memory Map with a method to retrieve User by address, creating a new User instance in case the value is missing: Java @Repository public class UserRepository { private final Map<String, User> users = new ConcurrentHashMap<>(); public User getUser(String address) { return users.computeIfAbsent(address, User::new); } } Let's define a controller allowing users to fetch nonce by their public address: Java @RestController public class NonceController { @Autowired private UserRepository userRepository; @GetMapping("/nonce/{address}") public ResponseEntity<Integer> getNonce(@PathVariable String address) { User user = userRepository.getUser(address); return ResponseEntity.ok(user.getNonce()); } } Step 3: Authentication Filter To implement a custom authentication mechanism with Spring Security, first, we need to define our AuthenticationFilter. Spring filters are designed to intercept requests for certain URLs and perform some actions. Each filter in the chain can process the request, pass it to the next filter in the chain, or not pass it, immediately sending a response to the client. Java public class MetaMaskAuthenticationFilter extends AbstractAuthenticationProcessingFilter { protected MetaMaskAuthenticationFilter() { super(new AntPathRequestMatcher("/login", "POST")); } @Override public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException { UsernamePasswordAuthenticationToken authRequest = getAuthRequest(request); authRequest.setDetails(this.authenticationDetailsSource.buildDetails(request)); return this.getAuthenticationManager().authenticate(authRequest); } private UsernamePasswordAuthenticationToken getAuthRequest(HttpServletRequest request) { String address = request.getParameter("address"); String signature = request.getParameter("signature"); return new MetaMaskAuthenticationRequest(address, signature); } } Our MetaMaskAuthenticationFilter will intercept requests with the POST "/login" pattern. In the attemptAuthentication(HttpServletRequest request, HttpServletResponse response) method, we extract address and signature parameters from the request. Next, these values are used to create an instance of MetaMaskAuthenticationRequest, which we pass as a login request to the authentication manager: Java public class MetaMaskAuthenticationRequest extends UsernamePasswordAuthenticationToken { public MetaMaskAuthenticationRequest(String address, String signature) { super(address, signature); super.setAuthenticated(false); } public String getAddress() { return (String) super.getPrincipal(); } public String getSignature() { return (String) super.getCredentials(); } } Step 4: Authentication Provider Our MetaMaskAuthenticationRequest should be processed by a custom AuthenticationProvider, where we can validate the user's signature and return a fully authenticated object. 
Let's create an implementation of AbstractUserDetailsAuthenticationProvider, which is designed to work with UsernamePasswordAuthenticationToken instances: Java @Component public class MetaMaskAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider { @Autowired private UserRepository userRepository; @Override protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { MetaMaskAuthenticationRequest auth = (MetaMaskAuthenticationRequest) authentication; User user = userRepository.getUser(auth.getAddress()); return new MetaMaskUserDetails(auth.getAddress(), auth.getSignature(), user.getNonce()); } @Override protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { MetaMaskAuthenticationRequest metamaskAuthenticationRequest = (MetaMaskAuthenticationRequest) authentication; MetaMaskUserDetails metamaskUserDetails = (MetaMaskUserDetails) userDetails; if (!isSignatureValid(authentication.getCredentials().toString(), metamaskAuthenticationRequest.getAddress(), metamaskUserDetails.getNonce())) { logger.debug("Authentication failed: signature is not valid"); throw new BadCredentialsException("Signature is not valid"); } } ... } The first method, retrieveUser(String username, UsernamePasswordAuthenticationToken authentication), should load the User entity from our UserRepository and compose a UserDetails instance containing the address, signature, and nonce: Java public class MetaMaskUserDetails extends User { private final Integer nonce; public MetaMaskUserDetails(String address, String signature, Integer nonce) { super(address, signature, Collections.emptyList()); this.nonce = nonce; } public String getAddress() { return getUsername(); } public Integer getNonce() { return nonce; } } The second method, additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication), performs the signature verification using the Elliptic Curve Digital Signature Algorithm (ECDSA). The idea is to recover the wallet address from a given message and signature. If the recovered address matches the address from MetaMaskUserDetails, then the user can be authenticated. 1. Get the message hash by adding a prefix to make the calculated signature recognizable as an Ethereum signature: Java String prefix = "\u0019Ethereum Signed Message:\n" + message.length(); byte[] msgHash = Hash.sha3((prefix + message).getBytes()); 2. Extract the r, s, and v components from the Ethereum signature and create a SignatureData instance: Java byte[] signatureBytes = Numeric.hexStringToByteArray(signature); byte v = signatureBytes[64]; if (v < 27) { v += 27; } byte[] r = Arrays.copyOfRange(signatureBytes, 0, 32); byte[] s = Arrays.copyOfRange(signatureBytes, 32, 64); Sign.SignatureData data = new Sign.SignatureData(v, r, s); 3. Using the Sign.signedMessageHashToKey() method, recover the public key from the message hash and signature: Java BigInteger publicKey = Sign.signedMessageHashToKey(msgHash, data); 4. Finally, get the wallet address and compare it with the initial address: Java String recoveredAddress = "0x" + Keys.getAddress(publicKey); if (address.equalsIgnoreCase(recoveredAddress)) { // Signature is valid. } else { // Signature is not valid.
} Here is the complete implementation of the isSignatureValid(String signature, String address, Integer nonce) method, with the nonce included in the message: Java public boolean isSignatureValid(String signature, String address, Integer nonce) { // Compose the message with nonce String message = "Signing a message to login: %s".formatted(nonce); // Extract the 'r', 's' and 'v' components byte[] signatureBytes = Numeric.hexStringToByteArray(signature); byte v = signatureBytes[64]; if (v < 27) { v += 27; } byte[] r = Arrays.copyOfRange(signatureBytes, 0, 32); byte[] s = Arrays.copyOfRange(signatureBytes, 32, 64); Sign.SignatureData data = new Sign.SignatureData(v, r, s); // Retrieve public key BigInteger publicKey; try { publicKey = Sign.signedPrefixedMessageToKey(message.getBytes(), data); } catch (SignatureException e) { logger.debug("Failed to recover public key", e); return false; } // Get recovered address and compare with the initial address String recoveredAddress = "0x" + Keys.getAddress(publicKey); return address.equalsIgnoreCase(recoveredAddress); } Step 5: Security Configuration In the security configuration, besides the standard formLogin setup, we need to insert our MetaMaskAuthenticationFilter into the filter chain before the default UsernamePasswordAuthenticationFilter: Java @Bean public SecurityFilterChain filterChain(HttpSecurity http, AuthenticationManager authenticationManager) throws Exception { return http .authorizeHttpRequests(customizer -> customizer .requestMatchers(HttpMethod.GET, "/nonce/*").permitAll() .anyRequest().authenticated()) .formLogin(customizer -> customizer.loginPage("/login") .failureUrl("/login?error=true") .permitAll()) .logout(customizer -> customizer.logoutUrl("/logout")) .csrf(AbstractHttpConfigurer::disable) .addFilterBefore(authenticationFilter(authenticationManager), UsernamePasswordAuthenticationFilter.class) .build(); } private MetaMaskAuthenticationFilter authenticationFilter(AuthenticationManager authenticationManager) { MetaMaskAuthenticationFilter filter = new MetaMaskAuthenticationFilter(); filter.setAuthenticationManager(authenticationManager); filter.setAuthenticationSuccessHandler(new MetaMaskAuthenticationSuccessHandler(userRepository)); filter.setAuthenticationFailureHandler(new SimpleUrlAuthenticationFailureHandler("/login?error=true")); filter.setSecurityContextRepository(new HttpSessionSecurityContextRepository()); return filter; } To prevent replay attacks in case the user's signature gets compromised, we will create an AuthenticationSuccessHandler implementation in which we change the user's nonce, so the user has to sign a message with the new nonce at the next login: Java public class MetaMaskAuthenticationSuccessHandler extends SimpleUrlAuthenticationSuccessHandler { private final UserRepository userRepository; public MetaMaskAuthenticationSuccessHandler(UserRepository userRepository) { super("/"); this.userRepository = userRepository; } @Override public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) throws ServletException, IOException { super.onAuthenticationSuccess(request, response, authentication); MetaMaskUserDetails principal = (MetaMaskUserDetails) authentication.getPrincipal(); User user = userRepository.getUser(principal.getAddress()); user.changeNonce(); } } Java public class User { ...
public void changeNonce() { this.nonce = (int) (Math.random() * 1000000); } } We also need to configure the AuthenticationManager bean injecting our MetaMaskAuthenticationProvider: Java @Bean public AuthenticationManager authenticationManager(List<AuthenticationProvider> authenticationProviders) { return new ProviderManager(authenticationProviders); } Step 6: Templates Java @Controller public class WebController { @RequestMapping("/") public String root() { return "index"; } @RequestMapping("/login") public String login() { return "login"; } } Our WebController contains two templates: login.html and index.html: 1. The first template will be used to authenticate with MetaMask. To prompt a user to connect to MetaMask and receive a wallet address, we can use the eth_requestAccounts method: JavaScript const accounts = await window.ethereum.request({method: 'eth_requestAccounts'}); const address = accounts[0]; Next, having connected the MetaMask and received the nonce from the back end, we request the MetaMask to sign a message using the personal_sign method: JavaScript const nonce = await getNonce(address); const message = `Signing a message to login: ${nonce}`; const signature = await window.ethereum.request({method: 'personal_sign', params: [message, address]}); Finally, we send the calculated signature with the address to the back end. There is a complete template templates/login.html: HTML <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org" lang="en"> <head> <title>Login page</title> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous"> <link href="https://getbootstrap.com/docs/4.0/examples/signin/signin.css" rel="stylesheet" crossorigin="anonymous"/> </head> <body> <div class="container"> <div class="form-signin"> <h3 class="form-signin-heading">Please sign in</h3> <p th:if="${param.error}" class="text-danger">Invalid signature</p> <button class="btn btn-lg btn-primary btn-block" type="submit" onclick="login()">Login with MetaMask</button> </div> </div> <script th:inline="javascript"> async function login() { if (!window.ethereum) { console.error('Please install MetaMask'); return; } // Prompt user to connect MetaMask const accounts = await window.ethereum.request({method: 'eth_requestAccounts'}); const address = accounts[0]; // Receive nonce and sign a message const nonce = await getNonce(address); const message = `Signing a message to login: ${nonce}`; const signature = await window.ethereum.request({method: 'personal_sign', params: [message, address]}); // Login with signature await sendLoginData(address, signature); } async function getNonce(address) { return await fetch(`/nonce/${address}`) .then(response => response.text()); } async function sendLoginData(address, signature) { return fetch('/login', { method: 'POST', headers: {'content-type': 'application/x-www-form-urlencoded'}, body: new URLSearchParams({ address: encodeURIComponent(address), signature: encodeURIComponent(signature) }) }).then(() => window.location.href = '/'); } </script> </body> </html> 2. 
The second template, templates/index.html, will be protected by our Spring Security configuration, displaying the Principal name as the wallet address after the user signs in: HTML <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org" xmlns:sec="http://www.thymeleaf.org/extras/spring-security" lang="en"> <head> <title>Spring Authentication with MetaMask</title> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous"> <link href="https://getbootstrap.com/docs/4.0/examples/signin/signin.css" rel="stylesheet" crossorigin="anonymous"/> </head> <body> <div class="container" sec:authorize="isAuthenticated()"> <form class="form-signin" method="post" th:action="@{/logout}"> <h3 class="form-signin-heading">This is a secured page!</h3> <p>Logged in as: <span sec:authentication="name"></span></p> <button class="btn btn-lg btn-secondary btn-block" type="submit">Logout</button> </form> </div> </body> </html> The full source code is provided on GitHub. In this article, we developed an alternative authentication mechanism with Spring Security and MetaMask using asymmetric cryptography. This method can fit into your application, but only if your target audience uses cryptocurrency and has the MetaMask extension installed in their browser.
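As a quick sanity check of the signature-recovery logic used in isSignatureValid, you can sign a message directly with web3j and confirm that the recovered address matches the signing key, with no browser or MetaMask involved. This is a minimal sketch for local experimentation; the class name and the fixed nonce value are assumptions, not part of the project above:

Java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

import org.web3j.crypto.ECKeyPair;
import org.web3j.crypto.Keys;
import org.web3j.crypto.Sign;

public class SignatureRoundTrip {

    public static void main(String[] args) throws Exception {
        // Generate a throwaway key pair; in the real flow, MetaMask holds the private key.
        ECKeyPair keyPair = Keys.createEcKeyPair();
        String expectedAddress = "0x" + Keys.getAddress(keyPair);

        // Same message format the login page signs, with a fixed nonce for the test.
        String message = "Signing a message to login: 42";

        // Sign with the Ethereum "personal message" prefix, as MetaMask's personal_sign does.
        Sign.SignatureData signature =
                Sign.signPrefixedMessage(message.getBytes(StandardCharsets.UTF_8), keyPair);

        // Recover the public key from the prefixed message and signature, then derive the address.
        BigInteger publicKey =
                Sign.signedPrefixedMessageToKey(message.getBytes(StandardCharsets.UTF_8), signature);
        String recoveredAddress = "0x" + Keys.getAddress(publicKey);

        // The two addresses should match, which is exactly what isSignatureValid relies on.
        System.out.println("Expected:  " + expectedAddress);
        System.out.println("Recovered: " + recoveredAddress);
        System.out.println("Match: " + expectedAddress.equalsIgnoreCase(recoveredAddress));
    }
}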
In today's increasingly digital world, securing your applications has become paramount. As developers, we must ensure that our applications are protected from unauthorized access and malicious attacks. One popular solution for securing Java applications is Spring Security, a comprehensive and customizable framework that provides authentication, authorization, and protection against various security threats. In this article, we will explore the basics of Spring Security and walk through a real-world example to demonstrate how it can be implemented in your application. By the end of this article, you should have a better understanding of the benefits of using Spring Security and how to utilize its features effectively. Overview of Spring Security Spring Security is a powerful and flexible framework designed to protect Java applications. It integrates seamlessly with the Spring ecosystem, providing a wide range of features, including: Authentication: Verifying the identity of users attempting to access your application. Authorization: Ensuring that authenticated users have the necessary permissions to access specific resources or perform certain actions. CSRF protection: Defending your application against Cross-Site Request Forgery (CSRF) attacks. Session management: Controlling user sessions, including session timeouts and concurrent session limits. Real-World Example: Secure Online Banking Application To illustrate the use of Spring Security, let's consider a simple online banking application. This application allows users to view their account balances, transfer money between accounts, and manage their personal information. Step 1: Set Up Spring Security Dependencies First, we need to include the necessary dependencies in our project's build file. For a Gradle project, add the following to your build.gradle: Groovy dependencies { implementation 'org.springframework.boot:spring-boot-starter-security' } Step 2: Configure Spring Security Next, we need to create a configuration class that extends WebSecurityConfigurerAdapter. This class will define the security rules for our application. (Note that Spring Security 5.7 and later deprecate WebSecurityConfigurerAdapter in favor of declaring a SecurityFilterChain bean; the adapter style shown here applies to older versions.) Java @Configuration @EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { @Autowired private UserDetailsService userDetailsService; @Override protected void configure(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeRequests() .antMatchers("/login", "/register").permitAll() .antMatchers("/account/**").hasRole("USER") .antMatchers("/transfer/**").hasRole("USER") .anyRequest().authenticated() .and() .formLogin() .loginPage("/login") .defaultSuccessUrl("/dashboard") .and() .logout() .logoutUrl("/logout") .logoutSuccessUrl("/login"); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsService).passwordEncoder(passwordEncoder()); } } In this configuration, we have defined the following rules: All users can access the login and registration pages. Only authenticated users with the "USER" role can access the account and transfer-related pages. All other requests require authentication. Custom login and logout URLs are specified. A password encoder is configured to use BCrypt for hashing user passwords. Step 3: Implement UserDetailsService Now, we need to create a custom UserDetailsService implementation that retrieves user details from our data source (e.g., a database).
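The implementation in the next step relies on a User entity (with a username, a BCrypt-hashed password, and a set of role names) and a UserRepository with a findByUsername method, neither of which is shown in the example. Here is a minimal sketch of what they might look like using Spring Data JPA; the class names, fields, and javax.persistence imports are assumptions chosen to match the Spring Boot 2.x-era configuration above:

Java
import java.util.HashSet;
import java.util.Set;

import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;

// A minimal user entity: username, BCrypt-hashed password, and role names such as "USER".
// In a real project, the entity and repository would live in separate files.
@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String username;

    private String password; // stored as a BCrypt hash, never in plain text

    @ElementCollection(fetch = FetchType.EAGER)
    private Set<String> roles = new HashSet<>();

    public String getUsername() { return username; }
    public String getPassword() { return password; }
    public Set<String> getRoles() { return roles; }
}

// Spring Data JPA derives the query from the method name.
interface UserRepository extends JpaRepository<User, Long> {
    User findByUsername(String username);
}

With something like this in place, the custom UserDetailsService implementation looks like this: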
Java @Service public class CustomUserDetailsService implements UserDetailsService { @Autowired private UserRepository userRepository; @Override public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { User user = userRepository.findByUsername(username); if (user == null) { throw new UsernameNotFoundException("User not found"); } return new org.springframework.security.core.userdetails.User(user.getUsername(), user.getPassword(), getAuthorities(user)); } private Collection<? extends GrantedAuthority> getAuthorities(User user) { return user.getRoles().stream() .map(role -> new SimpleGrantedAuthority("ROLE_" + role)) .collect(Collectors.toList()); } } This implementation queries the UserRepository to find a User entity by its username. If the user is found, it returns a Spring Security UserDetails object with the user's details and authorities (roles). Conclusion In this article, we introduced Spring Security and demonstrated its use in a simple online banking application. By leveraging the features provided by Spring Security, developers can effectively secure their applications against unauthorized access and common security threats. While this example only scratches the surface of what Spring Security has to offer, it serves as a starting point for further exploration. As you continue to work with Spring Security, you will discover more advanced features and customization options that can help you tailor the framework to your specific needs.
Learn how to record SSH sessions on a Red Hat Enterprise Linux VSI in a private VPC network using built-in packages. The private VPC network is provisioned through Terraform, and the RHEL packages are installed using Ansible automation. What Is Session Recording and Why Is It Required? As noted in "Securely record SSH sessions on RHEL in a private VPC network," a bastion host and a jump server are both security mechanisms used in network and server environments to control and enhance security when connecting to remote systems. They serve similar purposes but have some differences in their implementation and use cases. The bastion host is placed in front of the private network to take SSH requests from public traffic and pass them to the downstream machine. Bastion hosts and jump servers are vulnerable to intrusion as they are exposed to public traffic. Session recording helps an administrator of a system audit user SSH sessions and comply with regulatory requirements. In the event of a security breach, you as an administrator would want to audit and analyze the user sessions. This is critical for a security-sensitive system. Before deploying the session recording solution, you need to provision a private VPC network following the instructions in the article "Architecting a Completely Private VPC Network and Automating the Deployment." Alternatively, if you are planning to use your own VPC infrastructure, you need to attach a floating IP to the virtual server instance and a public gateway to each of the subnets. Additionally, you need to allow network traffic from the public internet. Deploy Session Recording Using Ansible To be able to deploy the session recording solution, you need to have the following packages installed on the RHEL VSI: tlog SSSD cockpit-session-recording The packages will be installed through Ansible automation on all the VSIs: both the bastion hosts and the RHEL VSI. If you haven't done so yet, clone the GitHub repository and move to the Ansible folder. Shell git clone https://github.com/VidyasagarMSC/private-vpc-network cd ansible Create hosts.ini from the template file. Shell cp hosts_template.ini hosts.ini Update the hosts.ini entries as per your VPC IP addresses. Plain Text [bastions] 10.10.0.13 10.10.65.13 [servers] 10.10.128.13 [bastions:vars] ansible_port=22 ansible_user=root ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc packages="['tlog','cockpit-session-recording','systemd-journal-remote']" [servers:vars] ansible_port=22 ansible_user=root ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc ansible_ssh_common_args='-J root@10.10.0.13' packages="['tlog','cockpit-session-recording','systemd-journal-remote']" Run the Ansible playbook to install the packages from an IBM Cloud private mirror/repository. Shell ansible-playbook main_playbook.yml -i hosts.ini --flush-cache Running Ansible playbooks As shown in the image, after you SSH into the RHEL machine, you will see a note saying that the current session is being recorded. Check the Session Recordings, Logs, and Reports If you look closely at the messages after SSH login, you will see a URL to the web console, which can be accessed using the machine name or private IP over port 9090. To allow traffic on port 9090, change the value of the allow_port_9090 variable to true in the Terraform code and run terraform apply. The latest terraform apply will add ACL and security group rules to allow traffic on port 9090. Now, open a browser and navigate to http://10.10.128.13:9090.
To access the console using the VSI name, you need to set up a private DNS (out of scope for this article). You need a root password to access the web console. RHEL web console Navigate to Session Recording to see the list of session recordings. Along with session recordings, you can check the logs, diagnostic reports, etc. Session recording on the web console Recommended Reading How to use Schematics - Terraform UI to provision the cloud resources
Apostolos Giannakidis, Product Security, Microsoft
Samir Behara, Senior Cloud Infrastructure Architect, AWS
Boris Zaikin, Lead Solution Architect, CloudAstro GmBH
Anca Sailer, Distinguished Engineer, IBM