Containers allow applications to run reliably across many different environments, and a single container encapsulates everything needed to run an application. Container technologies have exploded in popularity in recent years, leading to diverse use cases as well as new and unexpected challenges. This Zone offers insights into how teams can solve these challenges through its coverage of container performance, Kubernetes, testing, container orchestration, the use of microservices to build and deploy containers, and more.
CI/CD Docker: How To Create a CI/CD Pipeline With Jenkins, Containers, and Amazon ECS
How To Deploy Helidon Application to Kubernetes With Kubernetes Maven Plugin
Docker is an open-source platform for building, shipping, and running applications in containers. Containers provide a lightweight and portable way to package and deploy software, making it easier to move applications between environments and platforms. By using Docker to containerize your database application, you can ensure that it runs consistently across different environments, making it easier to deploy and manage. In this tutorial, we will walk through the process of containerizing a MySQL database using Docker and connecting to it using DbVisualizer. We will start with a simple example and then move on to more complex scenarios, including using Docker Compose to orchestrate multiple containers and using environment variables to configure our container.

Containerizing MySQL With Docker and DbVisualizer

Prerequisites

To follow this tutorial, you will need:

Docker
DbVisualizer
A text editor

Getting Started

Let's start by creating a simple Dockerfile for our MySQL database. The Dockerfile specifies the base image to use, any additional software packages to install, and any files to copy into the container. Create a new file named Dockerfile with the following contents:

Dockerfile
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD=password
COPY my-database.sql /docker-entrypoint-initdb.d/

This Dockerfile uses the official MySQL Docker image as the base image, sets the root password to "password" using the ENV instruction, and copies a SQL script named my-database.sql to the /docker-entrypoint-initdb.d/ directory inside the container. This script will be executed when the container starts up, creating our database and any tables or data we need. Save the Dockerfile to a new directory named my-database. Next, let's build our Docker image using the docker build command:

Shell
docker build -t my-database .

This command builds a Docker image with the name my-database using the Dockerfile in the current directory. Now that we have our Docker image, let's start a container from it using the docker run command:

Shell
docker run -p 3306:3306 --name my-database-container -d my-database

This command starts a container named my-database-container from the my-database image, maps port 3306 to the host, and runs the container in detached mode.

Connecting to the MySQL Database With DbVisualizer

With our container up and running, let's connect to our MySQL database using DbVisualizer. Open DbVisualizer and go to the Connection tab. Click the "Create a Connection" button to create a new connection.

Creating a database connection in DbVisualizer.

Select your database server type. For this tutorial, we will be choosing MySQL 8 (Connector/J) as the driver.

Choosing the database driver in DbVisualizer.

In the Driver Properties tab, select MySQL and enter the following information:

Database server: localhost
Database Port: 3306
Database UserId: root
Database Password: password (the password you set in the Dockerfile)

Connection details for MySQL database server in DbVisualizer.

Click the "Connect" button to test the connection. If the connection is successful, you should see a message indicating that the connection was established. You can now browse the database and run queries using DbVisualizer.

Successful connection message.
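If DbVisualizer reports a connection failure, a quick way to isolate the problem is to check the database directly from the command line. Here is a minimal sketch, assuming the container name and root password from the steps above, using the mysql client that ships inside the official MySQL image:

Shell
# Open a mysql shell inside the running container (the -p value is the root password)
docker exec -it my-database-container mysql -uroot -ppassword
# Inside the mysql prompt, list the databases created by my-database.sql:
# SHOW DATABASES;

If the database defined in my-database.sql shows up in the list, the initialization script ran as expected and any remaining problem lies on the connection side.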
Using Docker Compose With MySQL

So far, we have only worked with a single container. In a real-world scenario, we may need to deploy multiple containers that work together to form a larger application. Docker Compose is a tool for defining and running multi-container Docker applications, making it easier to orchestrate multiple containers and manage the dependencies between them. Let's create a Docker Compose file that defines our MySQL database container and a web application container that depends on it. Create a new file named docker-compose.yml with the following contents:

YAML
version: "3"
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./my-database.sql:/docker-entrypoint-initdb.d/my-database.sql
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db

This Docker Compose file defines two services: a MySQL database service named db and a web application service named web. The db service uses the official MySQL Docker image and sets the root password using an environment variable. It also mounts the my-database.sql file from the current directory as a volume so that it can be executed when the container starts up. The web service builds an image from the current directory and maps port 8000 to the host. It also depends on the db service so that the database container is started before the web application. To start the containers using Docker Compose, run the following command in the same directory as the docker-compose.yml file:

Shell
docker-compose up

This command starts the containers defined in the Docker Compose file and outputs their logs to the console. Press Ctrl+C to stop the containers. To start the containers in detached mode, add the -d option:

Shell
docker-compose up -d

This command starts the containers in the background.

MySQL Dockerfile Environment Variables

In some cases, we may need to configure our container using environment variables. For example, we may want to specify the database name, username, and password as environment variables rather than hard-coding them in our Dockerfile. Let's modify our Dockerfile to use build arguments and environment variables for the database name, username, and password. Create a new Dockerfile named Dockerfile-env with the following contents (note that each value passed with --build-arg must be declared with an ARG instruction before it can be referenced in an ENV instruction):

Dockerfile
FROM mysql:latest
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ENV MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USER}
ENV MYSQL_PASSWORD=${MYSQL_PASSWORD}
COPY my-database.sql /docker-entrypoint-initdb.d/

This Dockerfile uses build arguments to set the root password, database name, username, and password as environment variables. The COPY instruction remains the same. Save the Dockerfile to a new directory named my-database-env.

MySQL Docker Image

To build the Docker image, we need to pass in the values of the build arguments. We can do this using the --build-arg option:

Shell
docker build --build-arg MYSQL_ROOT_PASSWORD=password --build-arg MYSQL_DATABASE=my_database --build-arg MYSQL_USER=my_user --build-arg MYSQL_PASSWORD=my_password -t my-database-env -f Dockerfile-env .

This command builds a Docker image with the name my-database-env using the Dockerfile-env file in the current directory, passing in the values of the build arguments using the --build-arg option. (Keep in mind that values baked into an image this way are visible in the image history, so for real credentials it is safer to pass them at runtime instead.) To start a container from the image, use the same docker run command as before:

Shell
docker run -p 3306:3306 --name my-database-container -d -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=my_database -e MYSQL_USER=my_user -e MYSQL_PASSWORD=my_password my-database-env

This command starts a container named my-database-container from the my-database-env image, maps port 3306 to the host, and sets the values of the environment variables using the -e option.
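One caveat about the Compose setup above: depends_on only controls start order, not readiness, and MySQL can take several seconds to accept connections after its container starts. As a hedged sketch of one way to handle this, recent versions of Docker Compose support a healthcheck combined with a condition-based depends_on, so the web service waits until the database actually responds (the check command and intervals here are illustrative assumptions, not part of the original tutorial):

YAML
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
    healthcheck:
      # mysqladmin ships in the official image; "ping" succeeds once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy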
Conclusion

In this tutorial, we walked through the process of containerizing a MySQL database using Docker and connecting to it using DbVisualizer. We started with a simple example and then moved on to more complex scenarios, including using Docker Compose to orchestrate multiple containers and using environment variables to configure our container. By using Docker to containerize our database application, we can guarantee that it will function uniformly across various environments, which makes it simpler to deploy and manage. Additionally, using DbVisualizer to connect to our containerized database allows us to explore the database and execute queries just as we would with a typical database. Docker and DbVisualizer are powerful tools that can simplify the process of developing, deploying, and managing database applications. By combining these tools, we can create a seamless development and deployment workflow that ensures consistency and reliability across all environments. I hope this tutorial has been helpful in getting you started with Docker and DbVisualizer. If you have any questions or feedback, feel free to leave a comment below. Happy coding!

FAQs (Frequently Asked Questions)

1. What is Docker, and why should I containerize my database?
Docker is an open-source platform for building, shipping, and running applications in containers. Containerizing your database provides a lightweight and portable way to package and deploy your software, ensuring consistent behavior across different environments.

2. How do I containerize a MySQL database with Docker?
To containerize a MySQL database, create a Dockerfile specifying the base image, set environment variables, and copy SQL scripts into the container. Build the Docker image and run a container from it using Docker commands.

3. How do I connect to a containerized MySQL database with DbVisualizer?
Open DbVisualizer, create a new database connection, select the MySQL driver, and enter the connection details (server, port, user, password). Test the connection, and if successful, you can browse the database and execute queries in DbVisualizer.

4. What is Docker Compose, and how can I use it with MySQL?
Docker Compose is a tool for defining and running multi-container Docker applications. It helps orchestrate multiple containers and manage the dependencies between them. To use Docker Compose with MySQL, define services for the database and any related services in a docker-compose.yml file and run the containers using the docker-compose up command.

5. How can I use environment variables in my MySQL Docker image?
To use environment variables in a MySQL Docker image, modify the Dockerfile to include ENV instructions (and ARG instructions for build-time values) for the desired variables. Then, pass the values during the image build with the --build-arg option or at container run time with the -e option.
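One last practical note: when you are done experimenting, everything created in this tutorial can be torn down with a few standard Docker commands. The container and image names below match the examples above:

Shell
# Stop and remove the tutorial container
docker stop my-database-container
docker rm my-database-container
# Remove the images if you no longer need them
docker rmi my-database my-database-env
# If you used Docker Compose, stop and remove its containers too
docker-compose down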
Containerization is rapidly gaining popularity as a proven way to deploy and manage applications, and Docker has been the poster boy of this technology for the past few years. Docker is a widely adopted containerization platform that allows developers to package applications and their dependencies into a single container image. It simplifies application deployment and scaling, making it popular in various industries. However, as the containerization ecosystem expands, other solutions have emerged to address specific use cases and requirements. This article explores the top Docker alternatives that developers and DevOps professionals should consider when choosing a containerization solution for their projects.

1. Kubernetes: Orchestrating the Future

As Docker's popularity grew, so did the demand for a more robust orchestration system. Kubernetes stepped up to the challenge and quickly became the gold standard for container management. Kubernetes is a sophisticated platform for automating the deployment, scaling, and operation of application containers. Google created it, and it is currently managed by the Cloud Native Computing Foundation (CNCF). Its ability to manage containers across multiple hosts, together with an extensive ecosystem of tools and services, has propelled Kubernetes to the forefront of container orchestration solutions.

2. Podman: A Secure and Lightweight Alternative

While Docker has proven to be a versatile containerization tool, some have raised concerns about its security model and resource usage. Enter Podman, an innovative alternative that addresses these issues head-on. Podman offers a rootless container engine, which means containers run as regular users, bolstering security and eliminating the need for a daemon. Additionally, Podman boasts seamless Docker compatibility, making migration effortless. If you're looking for a lightweight and secure option to run containers individually or in pods, Podman might be the answer.

3. OpenShift: Enterprise-Grade Kubernetes

For organizations seeking an enterprise-grade Kubernetes solution, OpenShift emerges as a top choice. Red Hat's OpenShift, built on top of Kubernetes, extends its capabilities with features geared to the demands of large-scale deployments. OpenShift enables teams to design, deploy, and manage applications with an emphasis on security, multi-tenancy, and developer-friendly tools. Whether you're using a public or private cloud, OpenShift provides the necessary tools to make the most out of Kubernetes in a business setting.

4. rkt (Rocket): Fortifying Container Security

rkt, pronounced "rocket," is an open-source container runtime developed by CoreOS (now part of Red Hat). It focuses on security, simplicity, and composability. One of its unique features is "rkt fly," which enables users to run containers without requiring a central daemon. Moreover, rkt uses an industry-standard container image format called "App Container" (ACI), promoting compatibility and portability across various container runtimes. While not as widely adopted as Docker, rkt remains a solid choice for those who prioritize security and adherence to open standards.

5. OpenVZ

OpenVZ is an open-source, container-based virtualization solution that operates on the Linux kernel. Like LXD, OpenVZ emphasizes system containers, providing users with isolated environments that share the same kernel as the host system.
This approach results in efficient resource utilization and lower overhead compared to full virtualization solutions. OpenVZ excels at hosting multiple containers with minimal performance impact, making it a solid option for environments requiring a high-density container solution.

6. Amazon ECS

Amazon Elastic Container Service (ECS) is a container orchestration service that enables users to run and scale containerized applications on the AWS cloud infrastructure. While ECS can use Docker as its container runtime, it also supports other container runtimes such as containerd. ECS takes care of underlying infrastructure management, allowing developers to focus solely on deploying and managing their applications. This makes ECS an attractive choice for organizations already invested in the AWS ecosystem or seeking a hassle-free managed container solution.

7. Google Kubernetes Engine (GKE)

Google Kubernetes Engine, or GKE, is a managed service on Google Cloud Platform (GCP) for running and controlling Kubernetes clusters. Like Amazon ECS, GKE allows developers to deploy and manage containerized applications, in this case using Kubernetes, Google's powerful container orchestration system. GKE abstracts the underlying infrastructure complexities, providing a seamless and scalable container management experience on the GCP cloud. With GKE, organizations can leverage the robustness and scalability of Kubernetes while benefiting from Google's cloud services and advanced machine learning capabilities.

8. Apache Mesos

Apache Mesos is a distributed systems kernel that abstracts resources from physical or virtual machines, creating a unified pool of resources for applications to utilize. Mesos can manage both containerized and non-containerized applications, making it a versatile option for organizations with diverse workloads. It supports Docker containers as well as other container runtimes and provides powerful scheduling and resource allocation capabilities. Overall, Apache Mesos is well suited to large-scale, data-intensive applications that require efficient resource utilization and robust fault tolerance.

9. Nomad

HashiCorp's Nomad is a straightforward and adaptable workload orchestrator that can deploy and manage containerized and non-containerized applications across any infrastructure. It supports various container runtimes, including Docker, containerd, and rkt, giving users the freedom to choose the best containerization technology for their needs. Nomad is known for its ease of use and minimal setup overhead, making it an excellent alternative for organizations looking for a lightweight yet robust orchestration solution.

10. OpenShift

OpenShift, developed by Red Hat, is a robust, enterprise-grade Kubernetes platform that offers a number of additional features for building, deploying, and managing containerized applications. It provides a developer-friendly environment with features like source-to-image (S2I) builds, which enable developers to convert source code into container images effortlessly. OpenShift's integrated developer tools and automated workflows make it an attractive option for organizations seeking an end-to-end containerization solution with extensive developer support.

Wrapping Up

Docker has revolutionized containerization, but it is vital to explore emerging technologies that align with your organization's needs. These Docker alternatives offer diverse containerization options, addressing security, compatibility, performance, and simplicity.
Understanding and evaluating these alternatives will aid in making informed decisions when deploying and managing containerized applications. Embrace the evolving technology landscape to embark on a containerization journey that suits your development and deployment requirements. So, don't be afraid to explore and experiment with the containerization universe. Choosing wisely will empower your organization to leverage the full potential of containers and ensure efficient application deployment. Keep adapting to stay at the forefront of containerization trends, ensuring optimal productivity and success.
In part three of this series, we saw how to deploy our Quarkus/Camel-based microservices in Minikube, which is one of the most commonly used local Kubernetes implementations. While such a local Kubernetes implementation is very practical for testing purposes, its single-node nature doesn't satisfy real production environment requirements. Hence, in order to check our microservices' behavior in a production-like environment, we need a multi-node Kubernetes implementation. And one of the most common is OpenShift.

What Is OpenShift?

OpenShift is an open-source, enterprise-grade platform for container application development, deployment, and management based on Kubernetes. Developed by Red Hat as a component layer on top of a Kubernetes cluster, it comes as both a commercial product and a free platform, and as both on-premise and cloud infrastructure. As with any Kubernetes implementation, OpenShift has its complexities, and installing it as a standalone on-premise platform isn't a walk in the park. Using it as a managed platform on a dedicated cloud like AWS, Azure, or GCP is a more practical approach, at least in the beginning, but it requires a certain enterprise organization. For example, ROSA (Red Hat OpenShift Service on AWS) is a commercial solution that facilitates the rapid creation and simple management of a full Kubernetes infrastructure, but it isn't really a developer-friendly environment that allows one to quickly develop, deploy, and test cloud-native services. For this latter use case, Red Hat offers the OpenShift Developer's Sandbox, a development environment that gives immediate access to OpenShift without any heavy installation or subscription process and where developers can start practicing their skills and learning cycle, even before having to work on real projects. This totally free service, which doesn't require any credit card but only a Red Hat account, provides a private OpenShift environment in a shared, multi-tenant Kubernetes cluster that is pre-configured with a set of developer tools, like Java, Node.js, Python, Go, and C#, including a catalog of Helm charts, the s2i build tool, and OpenShift Dev Spaces. In this post, we'll be using the OpenShift Developer's Sandbox to deploy our Quarkus/Camel microservices.

Deploying on OpenShift

In order to deploy on OpenShift, Quarkus applications need to include the OpenShift extension. This might be done using the Quarkus CLI, of course, but given that our project is a multi-module Maven one, a more practical way of doing it is to directly include the following dependencies in the master POM:

XML
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-openshift</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-openshift</artifactId>
</dependency>

This way, all the sub-modules will inherit the dependencies. OpenShift is supposed to work with vanilla Kubernetes resources; hence, our previous recipe, where we deployed our microservices on Minikube, should also apply here. After all, both Minikube and OpenShift are implementations of the same de facto standard: Kubernetes. If we look back at part three of this series, our Jib-based build and deploy process was generating vanilla Kubernetes manifest files (kubernetes.yaml), as well as Minikube ones (minikube.yaml). Then, we had the choice between using the vanilla-generated Kubernetes resources or the more specific Minikube ones, and we preferred the latter alternative.
While the Minikube-specific manifest files could only work when deployed on Minikube, the vanilla Kubernetes ones are supposed to work the same way on Minikube as on any other Kubernetes implementation, like OpenShift. However, in practice, things are a bit more complicated and, as far as I'm concerned, I failed to successfully deploy on OpenShift the vanilla Kubernetes manifests generated by Jib. What I needed to do was to rename most of the properties whose names match the pattern quarkus.kubernetes.* to quarkus.openshift.*. Also, some vanilla Kubernetes properties, for example quarkus.kubernetes.ingress.expose, have a completely different name for OpenShift; in this case, quarkus.openshift.route.expose. But with the exception of these almost cosmetic alterations, everything remains the same as in our previous recipe of part three. Now, in order to deploy our microservices on the OpenShift Developer's Sandbox, proceed as follows.

Log in to OpenShift Developer's Sandbox

Here are the required steps to log in to the OpenShift Developer Sandbox:

Fire up your preferred browser and go to the OpenShift Developer's Sandbox site
Click on the Login link in the upper right corner (you need to have already registered with the OpenShift Developer Sandbox)
Click on the red button labeled Start your sandbox for free in the center of the screen
In the upper right corner, unfold your user name and click on the Copy login command button
In the new dialog labeled Log in with ..., click on the DevSandbox link
A new page is displayed with a link labeled Display Token. Click on this link.
Copy and execute the displayed oc command, for example:

Shell
$ oc login --token=... --server=https://api.sandbox-m3.1530.p1.openshiftapps.com:6443

Clone the Project From GitHub

Here are the steps required to clone the project's GitHub repository:

Shell
$ git clone https://github.com/nicolasduminil/aws-camelk.git
$ cd aws-camelk
$ git checkout openshift

Create the OpenShift Secret

In order to connect to AWS resources, like S3 buckets and SQS queues, we need to provide AWS credentials. These credentials are the Access Key ID and the Secret Access Key. There are several ways to provide these credentials, but here, we chose to use Kubernetes secrets. Here are the required steps. First, encode your Access Key ID and Secret Access Key in Base64 as follows:

Shell
$ echo -n <your AWS access key ID> | base64
$ echo -n <your AWS secret access key> | base64

Edit the file aws-secret.yaml and amend the following lines, replacing ... with the Base64-encoded values:

Shell
AWS_ACCESS_KEY_ID: ...
AWS_SECRET_ACCESS_KEY: ...

Create the OpenShift secret containing the AWS access key ID and secret access key:

Shell
$ kubectl apply -f aws-secret.yaml

Start the Microservices

In order to start the microservices, run the following script:

Shell
$ ./start-ms.sh

This script is the same as the one in our previous recipe in part three:

Shell
#!/bin/sh
./delete-all-buckets.sh
./create-queue.sh
sleep 10
mvn -DskipTests -Dquarkus.kubernetes.deploy=true clean install
sleep 3
./copy-xml-file.sh

The copy-xml-file.sh script that is used here to trigger the Camel file poller has been amended slightly:

Shell
#!/bin/sh
aws_camel_file_pod=$(oc get pods | grep aws-camel-file | grep -wv -e build -e deploy | awk '{print $1}')
cat aws-camelk-model/src/main/resources/xml/money-transfers.xml | oc exec -i $aws_camel_file_pod -- sh -c "cat > /tmp/input/money-transfers.xml"

Here, we replaced the kubectl commands with the oc ones.
Also, given that OpenShift has the particularity of creating pods not only for the microservices but also for the build and the deploy commands, we need to filter out from the list of running pods the ones whose names contain the strings build and deploy. Running this script might take some time. Once finished, make sure that all the required OpenShift controllers are running:

Shell
$ oc get is
NAME IMAGE REPOSITORY TAGS UPDATED
aws-camel-file default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-file 1.0.0-SNAPSHOT 17 minutes ago
aws-camel-jaxrs default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-jaxrs 1.0.0-SNAPSHOT 9 minutes ago
aws-camel-s3 default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-s3 1.0.0-SNAPSHOT 16 minutes ago
aws-camel-sqs default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-sqs 1.0.0-SNAPSHOT 13 minutes ago
openjdk-11 default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/openjdk-11 1.10,1.10-1,1.10-1-source,1.10-1.1634738701 + 46 more... 18 minutes ago
$ oc get pods
NAME READY STATUS RESTARTS AGE
aws-camel-file-1-build 0/1 Completed 0 19m
aws-camel-file-1-d72w5 1/1 Running 0 18m
aws-camel-file-1-deploy 0/1 Completed 0 18m
aws-camel-jaxrs-1-build 0/1 Completed 0 14m
aws-camel-jaxrs-1-deploy 0/1 Completed 0 10m
aws-camel-jaxrs-1-pkf6n 1/1 Running 0 10m
aws-camel-s3-1-76sqz 1/1 Running 0 17m
aws-camel-s3-1-build 0/1 Completed 0 18m
aws-camel-s3-1-deploy 0/1 Completed 0 17m
aws-camel-sqs-1-build 0/1 Completed 0 17m
aws-camel-sqs-1-deploy 0/1 Completed 0 14m
aws-camel-sqs-1-jlgkp 1/1 Running 0 14m
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aws-camel-jaxrs ClusterIP 172.30.192.74 <none> 80/TCP 11m
modelmesh-serving ClusterIP None <none> 8033/TCP,8008/TCP,8443/TCP,2112/TCP 18h

As shown in the listing above, all the required image streams have been created, and all the pods are either completed or running. The completed pods are the ones associated with the build and deploy operations; the running ones are associated with the microservices. There is only one service running: aws-camel-jaxrs. This service makes it possible to communicate with the pod that runs the aws-camel-jaxrs microservice by exposing the route to it. This is done automatically as an effect of the quarkus.openshift.route.expose=true property. And the microservice aws-camel-sqs needs, as a matter of fact, to communicate with aws-camel-jaxrs and, consequently, it needs to know the route to it. To get this route, you may proceed as follows:

Shell
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
aws-camel-jaxrs aws-camel-jaxrs-nicolasduminil-dev.apps.sandbox-m3.1530.p1.openshiftapps.com aws-camel-jaxrs http None

Now open the application.properties file associated with the aws-camel-sqs microservice and modify the rest-uri property so that it reads as follows:

Properties files
rest-uri=aws-camel-jaxrs-nicolasduminil-dev.apps.sandbox-m3.1530.p1.openshiftapps.com/xfer

Here, you have to replace the namespace nicolasduminil-dev with the value that makes sense in your case. Now, you need to stop the microservices and start them again:

Shell
$ ./kill-ms.sh
...
$ ./start-ms.sh
...
Your microservices should now run as expected, and you may check the log files by using commands like:

Shell
$ oc logs aws-camel-jaxrs-1-pkf6n

As you may see, in order to get the route to the aws-camel-jaxrs service, we need to start, stop, and then start our microservices again. This solution is far from elegant, but I didn't find any other, and I'm relying on the well-advised reader to help me improve it. It's probably possible to use the OpenShift Java client in order to perform, in Java code, the same thing that the oc get routes command is doing, but I didn't find out how, and the documentation isn't very explicit. I would like to present my apologies for not being able to provide the complete solution here, but enjoy it nevertheless!
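A possible half-step toward automating this is to script the route lookup instead of copying it by hand, using oc's standard jsonpath output. Here is a minimal sketch, assuming the route name and namespace from the listings above; the path to the application.properties file in the sed command is an illustrative assumption about the repository layout, not a verified location:

Shell
#!/bin/sh
# Fetch the host of the aws-camel-jaxrs route in the current namespace
JAXRS_ROUTE=$(oc get route aws-camel-jaxrs -o jsonpath='{.spec.host}')
echo "aws-camel-jaxrs route: $JAXRS_ROUTE"
# Illustrative: patch the rest-uri property before rebuilding the microservices
sed -i "s|^rest-uri=.*|rest-uri=$JAXRS_ROUTE/xfer|" aws-camel-sqs/src/main/resources/application.properties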
This GitHub Actions workflow builds a Docker image, tags it, and pushes it to one of three container registries. Here's a Gist with the boilerplate code.

Building Docker Images and Pushing to a Container Registry

If you haven't yet integrated GitHub Actions with your private container registry, this tutorial is a good place to start. The resulting workflow will log in to your private registry using the provided credentials, build existing Docker images by path, and push the resulting images to a container registry. We'll discuss how to do this for GHCR, Docker Hub, and Harbor.

Benefits and Use Cases

Building and pushing Docker images using your CI/CD platform is a best practice. Here's how it can improve your developer QoL:

Shared builds: Streamline the process, configuration, and dependencies across all builds for easy reproducibility.
Saves build minutes: Team members can access existing images instead of rebuilding from the source.
Version control: Easily duplicate previous builds with image tags, allowing teams to trace and pinpoint bugs.

Building a Docker Image

Using GitHub Actions to automate Docker builds will ensure you keep your build config consistent. This only requires substituting your existing build command(s) into the workflow YAML. In this workflow, the image is named after your GitHub repo using the ${{ github.repository }} expression, which resolves to the GITHUB_REPOSITORY environment variable.

YAML
name: Build Docker image
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Build and tag image
        run: |
          COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
          docker build -t ${{ github.repository }}:$COMMIT_SHA -f path/to/Dockerfile .

Versioning Your Docker Image Tags

Never rely on latest tags to version your images. We recommend choosing one of these two versioning conventions when tagging your images: using the GitHub commit hash or following the SemVer spec.

Using the GitHub Hash

GitHub Actions sets default environment variables that you can access within your workflow. Among these is GITHUB_SHA, which is the commit hash that triggered the workflow. This is a valuable versioning approach because you can trace each image back to its corresponding commit. In general, this convention uses the hash's first seven characters. Here's how we can access the variable and extract them:

YAML
- name: Build and tag image
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker build -t ${{ github.repository }}:$COMMIT_SHA -f path/to/Dockerfile .

Semantic Versioning

When using version numbers, it is best practice to follow the SemVer spec. This way, you can increment your version numbers following a consistent structure when releasing new updates and patches. Assuming you store your app's version in a root file version.txt, you can extract the version number from this file and tag the image in two separate steps. Note that a plain shell export does not survive across steps, so the version is written to $GITHUB_ENV instead:

YAML
- name: Get version
  run: |
    VERSION=$(cat version.txt)
    echo "Version: $VERSION"
    echo "VERSION=$VERSION" >> $GITHUB_ENV
- name: Build and tag image
  run: docker build -t ${{ github.repository }}:${{ env.VERSION }} -f path/to/Dockerfile .

Pushing a Docker Image to a Container Registry

You can easily build, tag, and push your Docker image to your private container registry of choice within only two or three actions. Here's a high-level overview of what you'll be doing:

Manually set your authentication token or access credential(s) as repository secrets.
Use the echo command to pipe credentials to standard input for registry authentication.
This way, no action is required on the user's part.
Populate the workflow with your custom build command. Remember to follow your registry's tagging convention.
Add the push command. You can find the proper syntax in your registry's docs.

You may prefer to split each item into its own action for better traceability on a workflow failure.

Pushing to GHCR

Step 1: Setting up GHCR Credentials

In order to access the GitHub API, you'll want to generate a personal access token. You can do this by going to Settings → Developer settings → New personal access token (classic), from where you'll generate a custom token to allow package access. Make sure to select write:packages in the Select scopes section. Store this token as a repository secret called GHCR_TOKEN.

Step 2: Action Recipe To Push to GHCR

You can add the following actions to your GitHub Actions workflow. This code will log in to GHCR, build, and push your Docker image. Note that ${{ github.repository }} already expands to owner/repo, so it should not be prefixed with the owner again:

YAML
- name: Log in to ghcr.io
  run: echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Build and tag image
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker build -t ghcr.io/${{ github.repository }}:$COMMIT_SHA -f path/to/Dockerfile .
- name: Push image to GHCR
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker push ghcr.io/${{ github.repository }}:$COMMIT_SHA

Pushing to Docker Hub

Step 1: Store Your Docker Hub Credentials

Using your Docker Hub login credentials, set the following repository secrets:

DOCKERHUB_USERNAME
DOCKERHUB_PASSWORD

Note: You'll need to set up a repo on Docker Hub before you can push your image.

Step 2: Action Recipe To Push to Docker Hub

Adding these actions to your workflow will automate logging in to Docker Hub, building and tagging an image, and pushing it. Since Docker Hub repository names cannot contain slashes, the repo name alone (${{ github.event.repository.name }}) is used instead of the full owner/repo value:

YAML
- name: Log in to Docker Hub
  run: echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin
- name: Build and tag image
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/${{ github.event.repository.name }}:$COMMIT_SHA -f path/to/Dockerfile .
- name: Push image to Docker Hub
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker push ${{ secrets.DOCKERHUB_USERNAME }}/${{ github.event.repository.name }}:$COMMIT_SHA

Pushing to Harbor

Step 1: Store Your Harbor Access Credentials

Create two new repository secrets to store the following info:

HARBOR_CREDENTIALS: Your Harbor username and password formatted as username:password
HARBOR_REGISTRY_URL: The URL corresponding to your personal Harbor registry

Note: You'll need to create a Harbor project before you can push an image to Harbor.

Step 2: Action Recipe To Push to Harbor

The actions below will authenticate into Harbor, build and tag an image using Harbor-specific conventions, and push the image. The username and password are split out of the combined secret, and only the password is piped to --password-stdin:

YAML
- name: Log in to Harbor
  run: |
    HARBOR_USER=$(cut -d ':' -f1 <<< "${{ secrets.HARBOR_CREDENTIALS }}")
    HARBOR_PASSWORD=$(cut -d ':' -f2 <<< "${{ secrets.HARBOR_CREDENTIALS }}")
    echo "$HARBOR_PASSWORD" | docker login -u "$HARBOR_USER" --password-stdin ${{ secrets.HARBOR_REGISTRY_URL }}
- name: Build and tag image
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker build -t ${{ secrets.HARBOR_REGISTRY_URL }}/project-name/${{ github.event.repository.name }}:$COMMIT_SHA -f path/to/Dockerfile .
- name: Push image to Harbor
  run: |
    COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
    docker push ${{ secrets.HARBOR_REGISTRY_URL }}/project-name/${{ github.event.repository.name }}:$COMMIT_SHA

Thanks for Reading!

I hope you enjoyed today's featured recipes. I'm looking forward to sharing more easy ways you can automate repetitive tasks and chores with GitHub Actions.
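P.S. If you'd rather not hand-roll the login and push commands, Docker maintains official actions that wrap the same plumbing. Here is a minimal sketch for the GHCR case, assuming your workflow's default GITHUB_TOKEN has been granted package write permission (otherwise substitute the GHCR_TOKEN secret from above):

YAML
- name: Checkout code
  uses: actions/checkout@v3
- name: Log in to ghcr.io
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push image
  uses: docker/build-push-action@v5
  with:
    context: .
    file: path/to/Dockerfile
    push: true
    tags: ghcr.io/${{ github.repository }}:${{ github.sha }}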
In this blog, you will take a closer look at Podman Desktop, a graphical tool for working with containers. Enjoy!

Introduction

Podman is a container engine, just as Docker is. Podman commands are executed by means of a CLI (Command Line Interface), but it would come in handy if a GUI were available. That is exactly the purpose of Podman Desktop! As stated on the Podman Desktop website: "Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment." In the next sections, you will execute most of the commands as executed in the two previous posts. If you are new to Podman, it is strongly advised to read those two posts first before continuing:

Is Podman a Drop-in Replacement for Docker?
Podman Equivalent for Docker Compose

Sources used in this blog can be found on GitHub.

Prerequisites

Prerequisites for this blog are:

Basic Linux knowledge; Ubuntu 22.04 is used during this blog;
Basic Podman knowledge, see the previous blog posts;
Podman version 3.4.4 is used in this blog because that is the version available for Ubuntu, although the latest stable release is version 4.6.0 at the time of writing.

Installation and Startup

First of all, Podman Desktop needs to be installed, of course. Go to the downloads page. When using the Download button, a flatpak file will be downloaded. Flatpak is a framework for distributing desktop applications across various Linux distributions. However, this requires you to install flatpak. A tar.gz file is also available for download, so use this one. After downloading, extract the file to /opt:

Shell
$ sudo tar -xvf podman-desktop-1.2.1.tar.gz -C /opt/

In order to start Podman Desktop, you only need to double-click the podman-desktop file. The Get Started with Podman Desktop screen is shown. Click the Go to Podman Desktop button, which will open the Podman Desktop main screen. As you can see from the screenshot, Podman Desktop detects that Podman is running, but also that Docker is running. This is already a nice feature because it means that you can use Podman Desktop for Podman as well as for Docker. At the bottom, a Docker Compatibility warning is shown, indicating that the Docker socket is not available and some Docker-specific tools will not function correctly. But this can be fixed, of course. In the left menu, you can find the following items from top to bottom: the dashboard, the containers, the pods, the images, and the volumes.

Build an Image

The container image you will try to build consists of a Spring Boot application. It is a basic application containing one REST endpoint, which returns a hello message. There is no need to build the application. You do need to download the jar-file and put it into a target directory at the root of the repository. The Dockerfile you will be using is located in the directory podman-desktop. Choose the Images tab in the left menu. Note that in the screenshot, both Podman images and Docker images are shown. Click the Build an Image button and fill in the form as follows:

Containerfile path: select the file podman-desktop/1-Dockerfile.
Build context directory: this is automatically filled out for you with the podman-desktop directory. However, you need to change this to the root of the repository; otherwise, the jar-file is not part of the build context and cannot be found by Podman.
Image Name: docker.io/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT
Container Engine: Podman

Click the Build button.
This results in the following error:

Shell
Uploading the build context from <user directory>/mypodmanplanet...Can take a while...
Error:(HTTP code 500) server error - potentially insufficient UIDs or GIDs available in user namespace (requested 262143:262143 for /var/tmp/libpod_builder2108531042/bError:Error: (HTTP code 500) server error - potentially insufficient UIDs or GIDs available in user namespace (requested 262143:262143 for /var/tmp/libpod_builder2108531042/build/.git): Check /etc/subuid and /etc/subgid: lchown /var/tmp/libpod_builder2108531042/build/.git: invalid argument

This error sounds familiar because it was also encountered in a previous blog. Let's try to build the image via the command line:

Shell
$ podman build . --tag docker.io/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT -f podman-desktop/1-Dockerfile

The image is built without any problem. An issue has been raised for this problem. At the time of writing, building an image via Podman Desktop is not possible.

Start a Container

Let's see whether you can start the container. Choose the Containers tab in the left menu and click the Create a Container button. A choice menu is shown. Choose Existing image. The Images tab is shown. Click the Play button on the right for the mypodmanplanet image. A black screen is shown, and no container is started. Start the container via the CLI:

Shell
$ podman run -p 8080:8080 --name mypodmanplanet -d docker.io/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT

The running container is now visible in Podman Desktop. Test the endpoint, which functions properly:

Shell
$ curl http://localhost:8080/hello
Hello Podman!

The conclusion is the same as for building the image: at the time of writing, it is not possible to start a container via Podman Desktop. What is really interesting is the actions menu. You can view the container logs. The Inspect tab shows you the details of the container. The Kube tab shows you what the Kubernetes deployment yaml file will look like. The Terminal tab gives you access to a terminal inside the container. You can also stop, restart, and remove the container from Podman Desktop. Although starting the container did not work, Podman Desktop offers some interesting features that make it easier to work with containers.

Volume Mount

Remove the container from the previous section. You will create the container again, but this time with a volume mount to a specific application.properties file, which will ensure that the Spring Boot application runs on port 8082 inside the container. Execute the following command from the root of the repository:

Shell
$ podman run -p 8080:8082 --volume ./properties/application.properties:/opt/app/application.properties:ro --name mypodmanplanet -d docker.io/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT

The container is started successfully, but an error message is shown in Podman Desktop. This error will show up regularly from now on. Restarting Podman Desktop resolves the issue. An issue has been filed for this problem. Unfortunately, the issue cannot be reproduced consistently. The mount is not shown in the Volumes tab, but that's because a bind mount like this is not a named volume. Let's create a named volume and see whether this shows up in the Volumes tab.

Shell
$ podman volume create myFirstVolume
myFirstVolume

The volume is not shown in Podman Desktop. It is available via the command line, however.

Shell
$ podman volume ls
DRIVER VOLUME NAME
local myFirstVolume

Viewing volumes is not possible with Podman Desktop at the time of writing. Delete the volume.
Shell
$ podman volume rm myFirstVolume
myFirstVolume

Create Pod

In this section, you will create a Pod containing two application containers. The setup is based on the one used for a previous blog. Choose the Pods tab in the left menu and click the Play Kubernetes YAML button. Select the YAML file Dockerfiles/hello-pod-2-with-env.yaml. Click the Play button. The Pod has started. Check the Containers tab, and you will see the three containers that are part of the Pod (the two application containers plus the Pod's infra container). Verify whether the endpoints are accessible:

Shell
$ curl http://localhost:8080/hello
Hello Podman!
$ curl http://localhost:8081/hello
Hello Podman!

The Pod can be stopped and deleted via Podman Desktop. Sometimes, Podman Desktop stops responding after deleting the Pod. After a restart of Podman Desktop, the Pod can be deleted without experiencing this issue.

Conclusion

Podman Desktop is a nice tool with some fine features. However, quite a few bugs were encountered when using it (I did not create an issue for all of them). This might be due to the older version of Podman that is available for Ubuntu, but then I would have expected an incompatibility warning to be raised when starting Podman Desktop. Nevertheless, it is a nice tool, and I will keep on using it for the time being.
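One closing tip: the Kube tab described above also has a CLI counterpart. A small sketch, assuming the mypodmanplanet container from earlier is still running, that produces comparable Kubernetes YAML from the command line:

Shell
# Generate Kubernetes YAML for an existing container or pod
$ podman generate kube mypodmanplanet > mypodmanplanet-pod.yaml
# The result can be replayed later, similar to the Play Kubernetes YAML button
$ podman play kube mypodmanplanet-pod.yaml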
What Is Kubernetes RBAC?

Often, when organizations start their Kubernetes journey, they look to implement least-privilege roles and proper authorization to secure their infrastructure. That's where Kubernetes RBAC comes in: it secures Kubernetes resources such as sensitive data, including deployment details, persistent storage settings, and secrets. Kubernetes RBAC provides the ability to control who can access each API resource and with what kind of access. You can use RBAC for both human users (individuals or groups) and non-human users (service accounts) to define their types of access to various Kubernetes resources. For example, access to three different environments (Dev, Staging, and Production) may have to be granted to team members such as developers, DevOps engineers, SREs, app owners, and product managers. Before we get started, we would like to stress that we will treat users and service accounts as the same: at a certain level of abstraction, every request, whether from a user or a service account, is ultimately an HTTP request. Yes, we understand that users and service accounts (for non-human users) are different in nature in Kubernetes.

How To Enable Kubernetes RBAC

One can enable RBAC in Kubernetes by starting the API server with the --authorization-mode flag set to include RBAC. The Kubernetes resources used to apply RBAC to users are:

Role, ClusterRole, RoleBinding, ClusterRoleBinding
Service Account

To manage users, Kubernetes provides an authentication mechanism, but it is usually advisable to integrate Kubernetes with your enterprise identity management for users, such as Active Directory or LDAP. When it comes to non-human users (or machines or services) in a Kubernetes cluster, the concept of a Service Account comes into the picture. For example, the Kubernetes resources need to be accessed by a CD application such as Spinnaker or Argo to deploy applications, or one pod of service A needs to talk to another pod of service B. In such cases, a Service Account is used to create an account for a non-human user and specify the required authorization (using RoleBinding or ClusterRoleBinding). You can create a Service Account with a YAML file like the one below (note that automountServiceAccountToken is a top-level field of ServiceAccount, not part of a spec section):

YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
automountServiceAccountToken: false

And then apply it:

Shell
$ kubectl apply -f nginx-sa.yaml
serviceaccount/nginx-sa created

Now you have to reference the ServiceAccount in the pod template of the Deployment resource:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      serviceAccountName: nginx-sa
      containers:
        - name: nginx1
          image: nginx
          ports:
            - containerPort: 80

In case you don't specify serviceAccountName in the Deployment resource, the pods will belong to the default Service Account. Note that there is a default Service Account for each namespace. All the default authorization policies of the default Service Account will be applied to pods where no Service Account info is mentioned. In the next section, we will see how to assign various permissions to a Service Account using RoleBinding and ClusterRoleBinding.

Role and ClusterRole

Role and ClusterRole are the Kubernetes resources used to define the list of actions a user can perform within a namespace or a cluster, respectively. In Kubernetes, actors such as users, groups, or ServiceAccounts are called subjects. The actions of a subject, such as create, read, write, update, and delete, are called verbs.
YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: dev-namespace
rules:
  - apiGroups:
      - ""
    resources: ["*"]
    verbs:
      - get
      - list
      - watch

In the above Role resource, we have specified that the read-only role is only applicable to the dev-namespace namespace and to all the resources inside the namespace. Any ServiceAccount or user bound to the read-only role can take these actions: get, list, and watch. Similarly, the ClusterRole resource will allow you to create roles pertinent to clusters. An example is given below:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chief-role
rules:
  - apiGroups:
      - ""
    resources: ["*"]
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete

Any user/group/ServiceAccount bound to the chief-role will be able to take any action in the cluster. In the next section, we will see how to grant roles to subjects using RoleBinding and ClusterRoleBinding. Also, note that Kubernetes allows you to configure custom roles using Role resources or use default user-facing roles such as the following:

cluster-admin: For cluster administrators, Kubernetes provides a superuser role. The cluster admin can perform any action on any resource in a cluster. It can be used in a ClusterRoleBinding to grant full control over every resource in the cluster (and in all namespaces) or in a RoleBinding to grant full control over every resource in the respective namespace.
admin: Kubernetes provides an admin role to permit unlimited read/write access to resources within a namespace. The admin role can create roles and role bindings within a particular namespace. It does not permit write access to the namespace itself. This can be used in a RoleBinding resource.
edit: The edit role grants read/write access within a given Kubernetes namespace. It cannot view or modify roles or role bindings.
view: The view role allows read-only access within a given namespace. It does not allow viewing or modifying of roles or role bindings.

RoleBinding and ClusterRoleBinding

To apply a Role to a subject (user/group/ServiceAccount), you must define a RoleBinding. This will give the subject the least-privileged access to required resources within the namespace, with the permissions defined in the Role configuration. (Note that ServiceAccount subjects take a namespace field rather than the rbac.authorization.k8s.io apiGroup.)

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding-dev
  namespace: dev-namespace
roleRef:
  kind: Role
  name: read-only # The role name you defined in the Role configuration
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: User
    name: Roy # The name of the user to give the role to
    apiGroup: rbac.authorization.k8s.io
  - kind: ServiceAccount
    name: nginx-sa # The name of the ServiceAccount to give the role to
    namespace: dev-namespace

Similarly, ClusterRoleBinding resources can be created to define the roles of users. Note that below we reference the default cluster-admin ClusterRole (the superuser role provided by Kubernetes) instead of using our custom role. This can be applied to cluster administrators.

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: superuser-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: User
    name: Aditi
    apiGroup: rbac.authorization.k8s.io

Benefits of Kubernetes RBAC

The advantage of Kubernetes RBAC is that it allows you to "natively" implement least privileges for the various users and machines in your cluster.
The key benefits are:

Proper Authorization

With least privileges granted to various users and Service Accounts over Kubernetes resources, DevOps teams and architects can implement one of the main pillars of zero trust. Organizations can reduce the risk of data breaches and data leakage and also avoid internal employees accidentally deleting or manipulating critical resources.

Separation of Duties

Applying RBAC to Kubernetes resources facilitates the separation of duties between users such as developers, DevOps engineers, testers, SREs, etc., in an organization. For example, for creating or deleting a new resource in a dev environment, developers should not depend on an admin. Similarly, deploying new applications into test servers and deleting the pods after testing should not be a bottleneck for DevOps or testers. Applying authorization and permissions for users such as developers and CI/CD deployment agents in their respective workspaces (say, namespaces or clusters) will decrease the dependencies and cut the slack.

100% Adherence to Compliance

Many industry regulations, such as HIPAA, GDPR, SOX, etc., demand tight authentication and authorization mechanisms in the software field. Using Kubernetes RBAC, DevOps teams and architects can quickly implement RBAC in their Kubernetes clusters and improve their posture for adhering to those standards.

Disadvantages of Kubernetes RBAC

For small and medium enterprises, Kubernetes RBAC on its own is usually sufficient, but at larger scale it has drawbacks:

There can be many users and machines, and applying Kubernetes RBAC at that scale can be cumbersome to implement and maintain.
Granular visibility of who performed what operation is difficult. For example, large enterprises would require information such as violations or malicious attempts against RBAC permissions.
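As a practical footnote, kubectl has a built-in authorization check that makes it easy to verify bindings like the ones described above. A small sketch, assuming the read-only Role and the nginx-sa ServiceAccount from the earlier examples:

Shell
# Can the nginx-sa ServiceAccount list pods in dev-namespace? (expected: yes)
$ kubectl auth can-i list pods --as=system:serviceaccount:dev-namespace:nginx-sa -n dev-namespace
yes
# Can it create deployments there? (expected: no, the role only allows get/list/watch)
$ kubectl auth can-i create deployments --as=system:serviceaccount:dev-namespace:nginx-sa -n dev-namespace
no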
In the rapidly evolving landscape of software development and deployment, containerization has become a cornerstone technology. Among the myriad containerization tools, Podman stands out as a lightweight, flexible, and efficient choice for macOS users. This guide is your gateway to the world of Podman, taking you through the seamless process of installing and running containers on your macOS system.

Installing Podman on MacOS

There are multiple avenues to bring Podman into your macOS environment. Below, we'll explore two popular methods: using Homebrew for convenience and manual installation for those who prefer hands-on control.

Method 1: Using Homebrew

Homebrew, the popular macOS package manager, simplifies software installations. Let's get started with Podman:

1. Open your terminal: Launch your terminal to begin the installation journey.
2. Install Homebrew: If Homebrew isn't already installed, execute the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

3. Install Podman: Once Homebrew is up and running, simply type:

brew install podman

Method 2: Manual Installation

For those who prefer a more hands-on approach, manual installation is the way to go:

1. Visit Podman's GitHub releases page: Go to the Podman GitHub releases page.
2. Select the right macOS version: Under the latest release, find the assets section and select the file corresponding to your macOS version.
3. Install Podman: Once downloaded, drag the Podman desktop application to your Applications folder for a hassle-free installation.

Setting up Podman

With Podman successfully installed on your macOS system, let's explore two paths for setting it up: using the intuitive desktop application or leveraging the power of the command-line interface (CLI).

Using Podman Desktop

1. Launch Podman Desktop: Find and open the Podman desktop application located in your Applications folder.
2. Install Podman: Click on "Install Podman" within the application. It will automatically configure Podman for you. In case of errors, the CLI method is available as a backup.

Using the CLI To Set Up Podman

In case you encounter issues with the desktop application, you can manually set up Podman using the command-line interface:

1. Open your terminal: Fire up your terminal for some command-line magic.
2. Execute the following commands to initiate the Podman setup:

podman machine stop
podman machine rm
podman machine init -v $HOME:$HOME -v /Users -v /Volumes -v /usr/local/lib/node_modules
podman machine set --rootful
podman machine start

These commands will ensure a smooth start for your Podman machine.

Verifying Your Installation

To ensure Podman is up and running without a hitch, use these commands:

Check running containers:

podman ps

Run a new container (example with NGINX):

podman run -d -p 8080:80 nginx

List running containers:

podman ps

Stop a container (replace <container_id> with the actual ID):

podman stop <container_id>

Uninstalling Podman

Should you ever decide to bid adieu to Podman, here's how to do it:

1. If you installed Podman using Homebrew, the cleanest route is to let Homebrew remove it, which deletes the files it placed under its Cellar path: brew uninstall podman
2. If you installed Podman manually: Delete all Podman-related files from the /opt path.

This will ensure a thorough and clean uninstallation of all Podman-related files and directories.

Conclusion

Podman opens the door to effortless container and pod management on macOS.
Whether you opt for the swift Homebrew installation or the hands-on manual setup, Podman empowers you with a robust containerization solution for your development and deployment needs. Dive in today and embark on your containerization journey with confidence. Happy coding!
There is no doubt that the cloud has changed the way we run our software. Startups, for instance, can get started without buying expensive hardware and can scale flexibly. The cloud has also enabled novel solutions such as serverless, managed Kubernetes and Docker, and edge functions. For a time, cloud-native applications seemed to be the way forward for most teams, big or small. But in exchange for all this power, we pay a cost, and it can be a steep one. 37signals, the company behind HEY.com and Basecamp.com, has calculated that by buying a few servers and moving from the cloud to on-premise hardware, it can save 7 million dollars over five years. And it is not an isolated case. Recently, Amazon Prime Video, a poster child of serverless architectures, moved part of its infrastructure from serverless to a monolith and cut costs by 90%. Does this mean that we should go back to bare metal servers like in the old days? Not quite. We can still enjoy many of the benefits of the cloud, like horizontal scalability and no-outage upgrades, by using containers in combination with an orchestration platform. On that note, 37signals recently released Kamal, the tool that allowed them to leave the cloud completely, saving money and improving performance for their users. What Is Kamal? Kamal is a deployment tool for containerized applications. It uses Docker containers to run the software and Traefik, a reverse proxy and load balancer, to perform rolling updates. The application container runs behind an instance of Traefik acting as a reverse proxy, and user requests are routed to the active application container. Kamal keeps things simple by: Using standard Docker images. Connecting to the servers via SSH. Giving each server a single role. The last point gives us the first clue about how Kamal works. It assumes a multi-server setup, where every machine fulfills only one role. In other words, if your application requires a database, Kamal expects at least two servers: one for the application and one for the database. Deploying the application and database with a single Kamal command. Kamal cares about your servers and little else. Load balancing is limited to the server level: Kamal uses Traefik to forward HTTP traffic to every container running on a machine. If you want to do horizontal scaling, you'll need to put a separate load balancer in front of everything. Kamal does not manage multi-server load balancing; you need to add your own. Because Kamal doesn't care where the application runs, you can use a cloud server, on-premise machines, or even VMs on your laptop to test drive the tool. Getting Started With Kamal You'll need the following to build and deploy an application with Kamal: Docker. A Docker Hub account or similar registry. A software project with its Dockerfile. Two servers with SSH access. Note: we need at least one server for each role, so a deployment will typically need at least two machines: one for the application and one for the database. Since Kamal assumes one role per server, you should not host the database on the same machine as the application. Ensure You Have SSH Access Before starting, ensure that you have SSH access to the deployment servers. They can be any cloud servers, bare metal servers, or even VMs on your laptop. What matters is that they already have your SSH key authorized for root access.
Shell
$ ssh root@SERVER_IP_ADDRESS

If that doesn't work, but you know the root password, you can add your key with the following command:

Shell
$ ssh-copy-id root@SERVER_IP_ADDRESS

Installing Kamal Now that we have our servers, we can install Kamal on our local machine, either as a Ruby gem:

Shell
$ gem install kamal -v 1.0.0

Or you can use the ready-made official Docker image by aliasing the command in your shell like this:

Shell
$ alias kamal="docker run -it --rm -v '${PWD}:/workdir' -v '/run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock' -e SSH_AUTH_SOCK='/run/host-services/ssh-auth.sock' -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/basecamp/kamal:latest"

Since Kamal is quite new and in active development, I suggest pinning a version and sticking with it to prevent updates from breaking your deployments. Configuring Kamal For this next step, we need a project to deploy. Kamal works with any codebase as long as it includes a Dockerfile. For this tutorial, I will be using the following demo project, which exposes an API-based address book with PostgreSQL for persistence: TomFern/dockerizing-nodejs In the project's root folder, run:

Shell
$ kamal init

This will create three files: config/deploy.yml: the main configuration file. It declares how to deploy the application to all your servers. .env: for sensitive environment variables, including passwords and tokens. It should be added to .gitignore so it is never checked into the repository. .kamal/hooks: contains Git-like hooks. We won't be using these features in this tutorial. The building blocks of Kamal deployments are called applications: containers running on one or more servers, connected to the Internet via Traefik, a reverse proxy. Let's configure our deployment. Open deploy.yml and edit the following values near the top of the file, uncommenting lines as needed. At the very least, you need to define: A name for the application. The server IP address. This will be your application server. The Docker image name, without a tag. Your Docker Hub username (you can also use a different registry). In registry.password, the string "DOCKER_PASSWORD". This is a reference to a variable defined in .env. Environment variables for the application. The values under clear are stored in plaintext; passwords should be stored in .env, with their variable names listed under secret in the config file. In the example below, I'm configuring an "address book" application. Its image name is tomfern/addressbook (notice the lack of a tag, which Kamal handles). The application uses a PostgreSQL database, so I set DB_HOST to the IP address of the database server and reference the variable containing the password in .env.

YAML
service: addressbook
image: tomfern/addressbook

# Deploy to these servers.
servers:
  - 216.238.101.228

# Credentials for your image host.
registry:
  username: tomfern
  password:
    - DOCKER_PASSWORD

# Environment variables
env:
  clear:
    DB_HOST: 216.238.113.141
  secret:
    - DB_PASSWORD

Kamal can handle containerized dependencies like databases, caches, or search services; Kamal calls them accessories. If you need to define an accessory, scroll down the config file until you find the accessories section. Then, set the following values: The database engine image name with its tag, e.g., postgres:14. The database server IP address and the database port. Any secrets or environment variables needed for database initialization.
One or more volume mappings for data persistence, so data is not wiped out when the container stops. You can also define startup scripts to run during database initialization. The following snippet shows how to configure a PostgreSQL database accessory for our demo app:

YAML
accessories:
  db:
    image: postgres:14
    host: 216.238.113.141
    port: 5432
    env:
      secret:
        - POSTGRES_PASSWORD
    volumes:
      - /var/lib/postgresql/data:/var/lib/postgresql/data

Now, open the .env file and fill in the password for your Docker Hub account and for the postgres user on your PostgreSQL server. The value of every secret environment variable defined in deploy.yml must be set here:

Shell
DOCKER_PASSWORD=YOUR_DOCKER_HUB_PASSWORD
POSTGRES_PASSWORD=THE_POSTGRES_ADMIN_PASSWORD
DB_PASSWORD=THE_ADDRESSBOOK_APP_PASSWORD

You may use the same password for POSTGRES_PASSWORD and DB_PASSWORD, or create a dedicated user for the app in the database. If you do the latter, you will also need to define DB_USER in the config file like this:

YAML
service: addressbook
image: tomfern/addressbook

# ...

env:
  clear:
    DB_HOST: 216.238.113.141
    DB_USER: my_app_username
  secret:
    - DB_PASSWORD

Add a Healthcheck Route By default, Kamal checks that the application container is up by running curl on the /up route. You can change the health check endpoint in the config file, as sketched below. Since the demo does not have a health check route, let's add one. In app.js, add the following lines:

JavaScript
// app.js
// ...
const healthRouter = require('./routes/health');
app.use('/up', healthRouter);
// ...

Create a new file called routes/health.js with the following content, which checks whether the app can connect to the database:

JavaScript
// routes/health.js
const express = require('express');
const router = express.Router();
const db = require('../database');

router.get("/", function(req, res) {
  db.sequelize.authenticate()
    .then(() => res.status(200).send(JSON.stringify({ ok: true })))
    .catch(err => res.status(500).send(JSON.stringify(err)));
});

module.exports = router;

Kamal uses curl inside the container to perform the health check, so ensure that your Dockerfile installs the tool. For example:

Dockerfile
FROM node:18.16.0-alpine3.17
RUN apk update && apk add curl
...

Prepare Servers and Deploy We're set to begin the deployment. Kamal can do everything with a single command:

Shell
$ kamal setup
Acquiring the deploy lock
Ensure curl and Docker are installed...
Log into image registry...
Build and push app image...
Ensure Traefik is running...
Start container with version c439617 using a 7s readiness delay (or reboot if already running)...
Releasing the deploy lock

This command does the following: Installs Docker on all machines. Starts Traefik on the app server. Starts the PostgreSQL container on the database server. Builds the Docker image for your application on your laptop and uploads it to Docker Hub. Pulls the image and starts the application on the server. Routes inbound traffic to the application container. Kamal runs a health check to verify that the application is ready to work before it is exposed to the Internet. Once the health check passes, Traefik routes traffic to the app's container. By default, the health check is a GET request to the /up route (expecting status code 200), but you can change that in deploy.yml. Kamal deployment mechanism: the image is built on the developer's machine, uploaded to Docker Hub, and pulled onto the server. Then, a health check ensures it starts correctly. Once verified, Traefik routes traffic to the application container.
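For reference, here is a minimal sketch of what overriding the health check settings in deploy.yml can look like in Kamal 1.x. Treat the exact option names as something to verify against the Kamal version you pinned:

YAML
# Health check settings (Kamal 1.x); adjust path and port to your app.
healthcheck:
  path: /up
  port: 3000
  max_attempts: 7
  interval: 20s

The demo app serves its health route at /up on port 3000, so these values mostly restate what the tutorial already relies on; change path if your application exposes a different route.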
We can check which containers are running with the following:

Shell
$ kamal details
Traefik Host: 216.238.101.228
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                               NAMES
5d08d56b760b   traefik:v2.9   "/entrypoint.sh --pr…"   2 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   traefik

App Host: 216.238.101.228
CONTAINER ID   IMAGE                                                          COMMAND                  CREATED         STATUS         PORTS      NAMES
28acfd2cda02   tomfern/addressbook:3ecd87430ab7ab6cc30a1542784ddb75fbfd8e74   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   3000/tcp   addressbook-web-3ecd87430ab7ab6cc30a1542784ddb75fbfd8e74

CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS                                       NAMES
a843852686b6   postgres:14   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   addressbook-db

As you can see, Kamal tags every built image with a unique identifier, allowing us to track changes and roll back and forward as needed. Before testing the application, we should run any database migration/setup scripts to initialize its contents. Kamal allows us to run commands in running containers:

Shell
$ kamal app exec "npm run migrate"

Finally, we can test the application, which should be ready to receive user requests:

Shell
$ curl -X PUT http://216.238.101.228/persons -H 'Content-Type: application/json' -d '{ "id": 1, "firstName": "David", "lastName": "Bowie" }'
$ curl 216.238.101.228/persons/all
{ "firstName": "David", "lastName": "Bowie", "id": 1, "updatedAt": "2023-04-30T22:44:29.115Z", "createdAt": "2023-04-30T22:44:29.115Z" }

Deploying Updates With Kamal Let's make a change to the application to see how Kamal handles updates. In the demo code, we have a /persons API endpoint; it would be a good idea to version it. So, let's change this line in app.js:

JavaScript
app.use('/persons', personsRouter);

Change the line so the base endpoint route is /persons/v1:

JavaScript
app.use('/persons/v1', personsRouter);

The fastest way to update the container is with kamal redeploy, which skips a few setup tasks, such as checking whether Docker is installed on all servers:

Shell
$ kamal redeploy
Acquiring the deploy lock
Running docker buildx build
Running docker image rm --force tomfern/addressbook:7db892 on 216.238.101.228
Running docker pull tomfern/addressbook:latest on 216.238.101.228
Health check against /up succeeded with 200 OK!
Finished all in 35.8 seconds
Releasing the deploy lock

The redeploy command rebuilds the image, uploads it to Docker Hub, and starts it on the application server. As soon as the health check passes, traffic is routed to the new version and the old container shuts down. Kamal redeploy mechanism: a new instance of the application is built and deployed. Once its health check passes, traffic is routed to the new instance, and the old one is shut down. After deployment, we can check that the new route is working:

Shell
$ curl 216.238.101.228/persons/v1/all
[ { "id": 1, "firstName": "David", "lastName": "Bowie", "createdAt": "2023-05-07T17:41:45.580Z", "updatedAt": "2023-05-07T17:41:45.580Z" } ]

Rolling Back Updates With Kamal Kamal gives us a safe path for rolling back updates. If the new version is causing trouble, we can return to the last working version with a single command.
To roll back, first we need to find which images are available on the server:

Shell
$ kamal app containers
App Host: 216.238.101.228
CONTAINER ID   IMAGE                                                          COMMAND                  CREATED         STATUS                     PORTS      NAMES
f648fec5f604   tomfern/addressbook:66347a86f8a123e35492dd43463540c23f7db892   "docker-entrypoint.s…"   4 minutes ago   Up 4 minutes               3000/tcp   addressbook-web-66347a86f8a123e35492dd43463540c23f7db892
b3b1d13b8a1c   9500e07b6387                                                   "docker-entrypoint.s…"   9 minutes ago   Exited (1) 4 minutes ago              addressbook-web-66347a86f8a123e35492dd43463540c23f7db892_05d41f3ba39d2b1b
ce4a5c31e6fc   tomfern/addressbook:f043325e3984ec245a94b21bd236afcc537a9739   "docker-entrypoint.s…"   3 hours ago     Exited (1) 9 minutes ago              addressbook-web-f043325e3984ec245a94b21bd236afcc537a9739

We can see that the previous version was tagged as f043325e3984ec245a94b21bd236afcc537a9739. We can run kamal rollback to go back to it. For example:

Shell
$ kamal rollback f043325e3984ec245a94b21bd236afcc537a9739

If the container no longer exists on the server (Kamal routinely prunes older images), you can always look up the last good version in Docker Hub or by checking the logs in your CI/CD platform. Kamal rollback mechanism: the old version is restarted and health-checked. Once it is working, traffic is switched over to the old version, and the new container stops. Once you're done with your application, you can remove everything, including the database, with kamal remove. This destroys the database, removes Docker, and stops all applications. Kamal's Limitations Kamal's development is ongoing, so we may expect behavior changes and breaking features in the future. But do not mistake its newness for immaturity; 37signals has reportedly already used it to move away from the cloud with great success. That being said, there are a few things Kamal cannot do, which makes it a bad fit for some use cases: The default behavior is to expose the application's HTTP port without SSL. You are expected to set up an SSL terminator, a load balancer, or a CDN in front of the whole deployment. Containers cannot communicate with each other within the same server. This is by design. You can always configure Docker networks server-side manually, but it's better to stick to one role per server, so putting the application and the database on the same machine is a bad idea. You must configure a firewall in front of your whole setup to ensure users can only access the web application. If you want to distribute load among many servers, you must set up a load balancer in front of them; Kamal does not provide this feature. Kamal is designed around one role per server and does not provide load balancing or SSL termination; you have to handle those yourself. At the end of the day, Kamal offers a simplified workflow, especially when compared with beasts like Kubernetes, by not trying to handle every aspect of the system. Deploying Applications With Kamal and CI/CD If you want to see an example of using Kamal in a CI/CD environment, check this tutorial. You can find all the code needed in this repository: TomFern/dockerizing-nodejs Troubleshooting If you experience issues with deployment, try the following: If the health check fails, ensure that curl is installed in the application container image. If it still fails after ensuring curl is installed, check that connectivity works between the application and the database. You may need to configure a VPC or set firewall rules.
You can also start the images manually with docker commands on the machines to see their logs and output; these will help you find the root cause of the problem. If you get an error message stating that there is a lock, run kamal lock release. This can happen when Kamal fails during a deployment and the deploy lock is not released. Conclusion Kamal is minimalistic to the point of elegance. Combining traditional servers with the flexibility of containers allows us to run our containerized services with ease on any server. Kamal presents a compelling solution if you're seeking to optimize your software deployments, reduce vendor lock-in, and maintain flexibility in choosing deployment environments while enjoying many of the benefits of cloud-native architectures.
Shift-left is an approach to software development and operations that emphasizes testing, monitoring, and automation earlier in the software development lifecycle. The goal of the shift-left approach is to prevent problems before they arise by catching them early and addressing them quickly. When you identify a scalability issue or a bug early, it is quicker and more cost-effective to resolve. Moving inefficient code to cloud containers can be costly, as it may activate auto-scaling and increase your monthly bill. Furthermore, you will be in a state of emergency until you can identify, isolate, and fix the issue. The Problem Statement I would like to walk you through a case where we averted a potential problem with an application before it could cause a major issue in a production environment. I was reviewing the performance report of the UAT infrastructure following a recent application change. The system was a Spring Boot microservice with MariaDB as the backend, running behind an Apache reverse proxy and an AWS Application Load Balancer. The new feature was successfully integrated, and all UAT test cases passed. However, I noticed that the charts in the MariaDB performance dashboard deviated from pre-deployment patterns. This is the timeline of the events. On August 6th at 14:13, the application was restarted with a new Spring Boot jar file containing an embedded Tomcat. Application restarts after migration At 14:52, the query processing rate for MariaDB increased from 0.1 to 88 queries per second, and then to 301 queries per second. Increase in query rate Additionally, the system CPU rose from 1% to 6%. Rise in CPU utilization Finally, the JVM time spent on G1 Young Generation Garbage Collection increased from 0% to 0.1% and remained at that level. Increase in GC time on JVM The application, in its UAT phase, was abnormally issuing 300 queries per second, far beyond what it was designed to do. The new feature had drastically increased the number of database calls. The monitoring dashboard confirmed that these measures were normal before the new version was deployed. The Resolution The application is a Spring Boot service that uses JPA to query a MariaDB database. It is designed to run on two containers for minimal load but is expected to scale up to ten. Web - app - db topology If a single container can generate 300 queries per second, what happens when all ten containers are operational and produce 3,000 queries per second? Will the database have enough connections left for the other parts of the application? We had no choice but to go back to the developers' table and inspect the changes in Git. The new change takes a few records from a table and processes them. This is what we observed in the service class:

Java
List<X> findAll = this.xRepository.findAll();

Using the findAll() method of Spring's CrudRepository without pagination is not efficient. Pagination reduces the time it takes to retrieve data from the database by limiting the amount of data fetched; this is what our basic RDBMS education taught us. Additionally, pagination keeps memory usage low, preventing the application from crashing due to an overload of data, and reduces the garbage collection effort of the Java Virtual Machine, which was mentioned in the problem statement above. Keep in mind that this test was conducted with only 2,000 records in one container; a paginated read, sketched below, is the textbook alternative.
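For illustration, here is a minimal sketch of a paginated read with Spring Data JPA. The entity X, its field y, and the repository are the article's placeholders, and the page size of 100 is an arbitrary assumption; tune it to your workload.

Java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.repository.PagingAndSortingRepository;

// Hypothetical repository for the article's placeholder entity X.
interface XRepository extends PagingAndSortingRepository<X, Long> {
    // Derived query: only the rows matching y, fetched one page at a time.
    Page<X> findAllByY(String y, Pageable pageable);
}

class XService {
    private final XRepository xRepository;

    XService(XRepository xRepository) {
        this.xRepository = xRepository;
    }

    // Processes matching rows in pages of 100 instead of loading everything.
    void processAll(String y) {
        Page<X> page = xRepository.findAllByY(y, PageRequest.of(0, 100));
        while (true) {
            page.forEach(this::process);
            if (!page.hasNext()) break;
            page = xRepository.findAllByY(y, page.nextPageable());
        }
    }

    private void process(X record) {
        // Placeholder for the real business logic.
    }
}

Each iteration pulls a bounded number of rows, so neither the database nor the JVM heap ever has to hold the full table at once.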
If the original unpaginated code had moved to production, where there are around 200,000 records and up to 10 containers, it could have caused the team a lot of stress and worry that day. The application was rebuilt with the addition of a WHERE clause to the method:

Java
List<X> findAll = this.xRepository.findAllByY(Y);

Normal functioning was restored. The number of queries per second decreased from 300 to 30, the garbage collection effort returned to its original level, and the system's CPU usage dropped. Query rate becomes normal Learning and Summary Anyone who works in Site Reliability Engineering (SRE) will appreciate the significance of this discovery. We were able to act on it without having to raise a Severity 1 flag. If this flawed package had been deployed in production, it could have triggered the customer's auto-scaling threshold, launching new containers even without additional user load. There are three main takeaways from this story. First, it is best practice to turn on an observability solution from the beginning, as it provides a history of events that can be used to identify potential issues. Without this history, I might not have taken a 0.1% garbage collection percentage and 6% CPU consumption seriously, and the code could have been released into production with disastrous consequences. Expanding the scope of the monitoring solution to the UAT servers helped the team identify potential root causes and prevent problems before they occurred. Second, performance-related test cases should exist in the testing process, and they should be reviewed by someone with experience in observability. This ensures that both the functionality of the code and its performance are tested. Third, cloud-native performance tracking techniques are good for receiving alerts about high utilization, availability, and the like, but to achieve observability, you need the right tools and expertise in place. Happy Coding!
Are you attempting to determine the ideal method for deploying your apps in the cloud? The two most common solutions are Serverless and Containers, but deciding which one to use can be difficult. Which one is superior? Which is more economical? Which one is simpler to manage? In this blog, we will compare Serverless and Containers and explain when to use each one. We will also discuss another popular option to consider, Microservices Architecture, and how it fits into the picture. By the end of this post, you'll know precisely how Containers and Serverless stack up against one another and which one is better for your purposes. So, let's dive into the world of Serverless vs Containers and find out which one reigns supreme! What Is Serverless? Serverless is a cloud computing model in which the cloud provider controls the infrastructure needed to run applications. With Serverless, developers don't have to worry about maintaining infrastructure, operating systems, or servers while they write their code. Because the cloud provider dynamically allocates resources, developers only pay for the application's actual usage and not for unused resources. Developers who use a serverless architecture divide their program into a series of small, independent functions that are called when certain conditions are met. Each function can be written in a variety of programming languages, including Python, Node.js, or Java, and is intended to carry out a particular purpose. When an event happens, the associated function is called, and the cloud provider provisions the resources required to run it. With serverless computing, developers can create and deploy apps quickly and easily without having to worry about the underlying infrastructure. Its high degree of scalability, flexibility, and affordability makes it a good answer for a wide range of use cases, from straightforward web apps to intricate data processing pipelines. Serverless computing has become increasingly popular in recent years as more developers adopt this method of creating cloud-native applications. What Is a Container? A Container is a small, standalone, executable bundle of software that includes code, libraries, system tools, and configuration settings. Containers, in contrast to typical virtual machines, share the kernel of the host machine and do not require a separate operating system. Containers are frequently used to build microservices, a style of software architecture that divides an extensive program into smaller, independent services that can be developed, deployed, and managed separately. As each microservice is deployed in its own Container, it is simple to scale it up or down in response to demand. Portability is one of Containers' main benefits. Because they include everything needed to run an application, containers can be moved between environments and run reliably regardless of the underlying infrastructure. This makes it simpler to create, test, and deploy apps across various cloud service providers and platforms. Overall, containers are a robust technology that offers numerous advantages for the deployment and development of modern software. What Is Docker? Docker is a popular open-source containerization technology that enables programmers to build, distribute, and operate applications in a containerized environment.
Applications may be built, tested, and deployed across various environments more easily, thanks to Docker's simplified container creation and management process. Docker's portability is one of its main advantages: containers can be easily moved between environments, including development, testing, and production, without changes to the underlying infrastructure. This facilitates team collaboration on projects and uniform application deployment across many settings. Moreover, Docker offers a standardized method for packaging and delivering programs, which simplifies sharing and reusing code between projects. Ultimately, by offering a more streamlined and effective method of containerization, Docker has fundamentally changed how developers build and deploy programs. What Are Microservices? Microservices are a software development strategy that divides big, monolithic applications into more manageable, autonomous services that collaborate to deliver an application's overall functionality. Each Microservice in the system has its own codebase, is intended to carry out a single task, and can be created, deployed, and scaled independently of the other Microservices. The microservices architecture makes software development more agile and flexible, since changes may be made to individual microservices without affecting the entire program. It also enables teams to work more independently on specific microservices, accelerating development and deployment timeframes. Overall, the scalability, dependability, and maintainability of complex software applications can be enhanced by the use of microservices. Now that we have a basic understanding of Serverless, Containers, Docker, and Microservices, let's take a closer look at the Serverless Services offered by one of the leading cloud providers, AWS. What Are the Serverless Services on AWS? AWS (Amazon Web Services) offers a variety of serverless services that let developers deploy their apps without having to worry about managing servers or infrastructure. These services are intended to make the development and scaling of apps simpler and more affordable. Here are a few of AWS's most popular serverless services: AWS Lambda: An event-driven, serverless computing service that enables you to execute code for a variety of applications or backend services without the need to provision or manage servers. You only pay for the compute resources consumed by your code. Amazon API Gateway: A fully managed service that simplifies the process for developers to design, deploy, and secure APIs at any scale. APIs act as the primary point of entry for applications to access data, business logic, or functions from backend services. With no minimum charges or initial expenses, you are only billed for the API calls you receive and the amount of data transferred out. AWS Step Functions: A service that allows developers to create, automate, and orchestrate various AWS services to build distributed applications, workflows, and data pipelines. It offers a visual interface that makes it easy to create and manage workflows, and you only pay for the resources you use. Amazon DynamoDB: A fully managed, serverless NoSQL database designed to run high-performance applications at any scale. It provides built-in security, continuous backups, automated multi-region replication, in-memory caching, and data import and export tools.
Furthermore, you only pay for the capacity you use, as well as any optional features you enable. Amazon S3: A highly scalable, secure, and performant object storage service. S3 offers customers of all sectors and sizes the ability to store and secure any amount of data. With a pay-as-you-go pricing model, customers only pay for the amount of storage they use, without any minimum charges. To summarize, AWS offers an extensive range of serverless services that empower developers to create, implement, and operate applications without having to concern themselves with managing infrastructure, and these are just a few of them. Why Do You Need Lambda? You might be wondering why you need a specific service like AWS Lambda now that we've covered Serverless, Containers, Docker, Microservices, and AWS Serverless Services. As cloud-native apps grow in popularity, there is a higher demand for scalable, adaptable, and cost-efficient solutions. AWS Lambda is a serverless, event-driven computing service that enables developers to run code without having to worry about maintaining servers or infrastructure. Lambda is a powerful, fully managed, and flexible service: developers do not have to procure or manage servers, which frees them to concentrate on building and deploying code. By enabling developers to run code in response to events, Lambda also makes it simple to create highly scalable and responsive apps. Moreover, Lambda is very cost-effective because it requires no up-front payments or commitments and charges only for the memory you allocate, the duration of function execution, and the number of requests made. Read our article "AWS Lambda Pricing" to learn more about it. Usage Statistics and Trends in the Industry The debate between Containers and Serverless has dominated industry discussions in recent years. Let's look at the usage statistics to see where the industry stands at the moment. Serverless Serverless architecture and FaaS (Function-as-a-Service) have become increasingly popular among the CNCF community over the past year. According to the 2022 CNCF annual survey, serverless architecture/FaaS usage has increased from 30% to 53%, indicating a noticeable boost in its popularity. This trend can be partly attributed to serverless advantages, which include lower development costs, a faster time to market, and scalability. The increased use of serverless computing further emphasizes the significance of cloud-native technologies and their role in modern application development. Source Containers According to the 2022 CNCF annual survey, containers have reached mainstream adoption, with 44% of respondents already using them for almost all business segments and applications. An additional 35% of respondents said that containers are used for at least a few production applications. Source Both Serverless and Containers are becoming more popular and widely used across a range of sectors, although, by the survey numbers above, containers currently remain the more widely adopted of the two. Key Differences Between Serverless vs Containers As we delve into modern application development, Containers and Serverless are two well-known buzzwords that have gained a lot of popularity.
Both of these technologies are intended to address particular difficulties in application development, and each has its distinct advantages. Containers have been around for a while, whereas serverless is a more recent addition to the developer's toolkit. While there are some parallels between the two, they also have significant distinctions that make each more appropriate for particular purposes. To help you decide which strategy better suits your application development needs, we'll examine the key differences between Serverless and Containers in this section. Time to Market Serverless: Developers can concentrate on writing code rather than handling infrastructure, which reduces the time to market. Containers: When deploying applications, containers take more setup time and management work. Ease of Use Serverless: Because developers do not have to handle infrastructure, serverless architectures simplify the development and deployment of applications. It enables them to concentrate more on writing code and less on infrastructure-related responsibilities. For teams that want to focus on business logic and product development rather than infrastructure administration, serverless is the best option. Containers: Applications that need to move easily between different environments benefit from containers' lightweight, portable runtime environment. However, managing containers can be difficult and requires a thorough understanding of the underlying technology. This limits the accessibility of containers for small teams or developers with little infrastructure background. Scaling Serverless: With Serverless, there is no need to scale the application manually; the cloud provider does it automatically based on usage. It also ensures that the infrastructure is highly resilient and available to handle failures. Containers: Containers can be scaled horizontally, but this requires either setting up an autoscaling mechanism or scaling them manually. For large-scale applications, this can be time-consuming and difficult, so serverless is preferable if you want automated scaling. High Availability Serverless: Because the cloud provider handles infrastructure administration and failover mechanisms, serverless architectures are highly available and resilient to failures. Containers: Containers can also be highly available, but guaranteeing failover mechanisms requires more manual configuration and infrastructure administration. For smaller teams or developers with less infrastructure expertise, this can be more difficult. Costs on the Cloud Serverless: Serverless can be more cost-effective because developers only pay for the specific resources their applications use, as opposed to a fixed cost for the complete infrastructure. Containers: Regardless of usage, containers can be more expensive because they need more infrastructure management and frequently carry a fixed cost for the complete infrastructure. Costs on Development Serverless: Because developers can concentrate more on writing code and less on managing infrastructure, serverless can be less costly to develop for. This can mean lower development costs and a quicker time to market. Containers: Managing and configuring additional infrastructure is necessary for containers, which costs developers time and money. This can lead to higher development expenses and a longer time to market.
Performance Serverless: For smaller apps, serverless can deliver good performance because the cloud provider handles the underlying infrastructure and dynamically grows the resources based on demand. For larger or more complicated programs, there might be performance concerns due to cold starts or other factors. Containers: Containers need more manual configuration and performance optimization, but they can deliver great performance for bigger and more complicated applications, and they can be scaled horizontally to satisfy demand. Compatibility With Languages or Platforms Serverless: Node.js, Python, and Java are just a few of the well-known programming languages and platforms that serverless technology supports. However, the set of supported languages is limited, and the specifics differ from one serverless platform to another. Containers: Containers work with a wide variety of languages and platforms; developers only need to make sure the application and its supporting dependencies fit in the image. As long as the host server supports the language runtime, you can containerize an application written in any language. Vendor Lock-in Serverless: Because developers must rely on the infrastructure and services of the cloud provider, serverless designs risk vendor lock-in. Containers: Containers lower the risk of vendor lock-in because they allow more flexibility in vendor selection and infrastructure administration. Security Serverless: Serverless systems may be more secure because the cloud provider handles infrastructure security and patching. Developers must, however, make sure that their code is safe and follows best practices. Containers: Containers can also be secure, although this involves more manual infrastructure maintenance and configuration. Developers are required to follow best practices and make sure their containers are patched. Logs Serverless: Serverless architectures provide centralized logging and monitoring, making it simpler for developers to track and examine application logs. Containers: Tracking and analyzing application logs is more challenging with containers, since they require more manual configuration for logging and monitoring. Use Cases Both serverless and container technologies are well-suited to several use cases thanks to their adaptability. As these technologies develop and grow, they are becoming more popular and adaptable for a variety of projects. Here are some of the most common use cases where Serverless and Containers can be applied. Serverless Web Applications Web apps are applications that may be accessed using a web browser or other web-based interface. They serve a variety of functions, including e-commerce, social networking, collaboration tools, and content management systems. Handling unanticipated traffic spikes, which might be caused by sharp increases in user activity, marketing initiatives, or outside events, is one of the main issues in developing online applications. In conventional systems, this typically requires expanding the underlying infrastructure by adding more servers or compute resources, which can be time-consuming and expensive. This issue can be solved with a serverless architecture that enables web applications to scale up or down freely in response to variations in demand, without requiring manual intervention.
This is accomplished by breaking down the application into manageable, independent functions that can be run on demand in response to events or triggers. Serverless architecture is a suitable fit for developing online apps that encounter unforeseen traffic for a few reasons: Scalability: Serverless functions are built to scale dynamically based on demand, so they can handle unforeseen traffic spikes without degrading performance or dependability. This keeps web applications highly available and responsive even during moments of peak traffic. Cost-effectiveness: Serverless architecture allows you to avoid maintaining a sizable, dedicated infrastructure by charging only for the compute resources you actually use. Since you only pay for what you use, this can be a cost-effective solution for online services with fluctuating traffic patterns. Agility: Serverless architecture frees developers from managing the underlying infrastructure so they can concentrate on creating and deploying applications quickly. As a result, developers can experiment and test new features with greater agility, without worrying about scaling problems. Overall, because serverless architecture enables scalable, cost-effective, and flexible development and deployment, it is an appropriate fit for online applications that must withstand unforeseen traffic surges. Backend Processing Data processing, file processing, and data analysis are examples of tasks that can be time- and resource-intensive, making them ideal candidates for serverless computing. Developers can create and execute these jobs using serverless architecture without having to worry about managing the underlying infrastructure. Serverless functions can process massive volumes of data without any manual assistance, since they scale automatically based on demand. Jobs like data analysis, which must process vast amounts of data in a particular order or sequence, benefit from this. The affordability of serverless computing is a major benefit for operations like data processing, file processing, and data analysis: instead of maintaining a sizable, dedicated infrastructure, you pay only for the compute resources used when a function is invoked. Overall, because it enables developers to handle data in batches or in real time while remaining affordable and scalable, serverless computing is an appropriate fit for jobs involving data processing, file processing, and data analysis. Event-Driven Applications Event-driven applications are created to react to specific events or triggers, like an incoming message or a user action. Because it enables developers to write code that is triggered by particular events or conditions without managing infrastructure, serverless computing is well-suited to event-driven applications. In event-driven architectures, events can be generated by a variety of sources, including databases, messaging systems, or Internet of Things (IoT) devices. A serverless function can be triggered in response to an event to carry out a particular action or set of actions. For instance, a serverless function can be used to process a file automatically when it is uploaded to a storage bucket, such as resizing a picture or extracting content from a document.
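To make the upload example concrete, here is a minimal sketch of a Node.js Lambda handler for an S3 "object created" event. The bucket and key are read from the event payload that S3 delivers to Lambda; the processing step is a placeholder for whatever resizing or extraction you need:

JavaScript
// handler.js: minimal AWS Lambda handler for S3 "ObjectCreated" events.
exports.handler = async (event) => {
  for (const record of event.Records) {
    // The S3 event payload carries the bucket name and the URL-encoded key.
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`Processing new object: s3://${bucket}/${key}`);
    // Placeholder: download the object, resize the image, extract text, etc.
  }
};

Wiring the function to a bucket is done in the S3 notification and Lambda configuration rather than in code, which is what keeps the handler this small.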
Similarly, a serverless function can be triggered to update other systems, such as sending a message or starting a workflow, whenever a new entry is added to a database. Because serverless functions can handle high numbers of events without manual intervention, the serverless architecture enables event-driven applications to scale automatically in response to demand. Overall, serverless computing is an excellent option for creating event-driven applications because it enables programmers to write code that is triggered by particular events or circumstances while remaining scalable and affordable. Container Application Deployment Deploying applications is an essential part of developing and delivering software, and containers have become a common method for doing so in real-world situations. Here is how containers can be used for application deployment: Consistency: No matter what underlying infrastructure or operating system is used, containers offer a consistent environment for running applications. In other words, the same containerized application can be deployed in various environments, such as development, testing, and production, without compatibility issues. Reliability: Applications perform more consistently and dependably in containers because they are isolated from the underlying infrastructure. All dependencies and libraries necessary to run an application are packaged together in the container, so they are always available and at the expected versions. Scalability: Containers are ideal for applications with fluctuating workloads or unpredictable traffic, since they can be readily scaled up or down based on demand. Container orchestration systems, like Kubernetes or Docker Swarm, can deploy and manage containers while providing automatic scaling and load balancing. Portability: Containers are portable, making it simple to move them from one environment to another, such as from a developer's laptop to a testing or production environment. Containers are made to be portable and lightweight, and they come bundled with all necessary dependencies and libraries. Overall, containers are a consistent and dependable method for deploying applications in real-life situations. Their consistency, dependability, scalability, and portability make them an excellent option for businesses wishing to streamline their application deployment process and guarantee that their applications function correctly in a variety of environments. Continuous Integration and Continuous Deployment (CI/CD) Continuous Integration and Continuous Deployment (CI/CD) is a software development practice that aims to automate the entire software development process, from code changes to deployment in production environments. Containers offer a consistent and dependable environment for testing, building, and deploying applications, making them a great choice for implementing CI/CD pipelines. Using containers in a CI/CD pipeline has the following advantages: Consistency: By offering a consistent environment for testing, building, and deploying programs, containers make it possible to get the same results across different environments. Scalability: Containers are easily scaled up or down to match the demands of the development process, so resources are utilized effectively. Automation: Testing, building, and deployment can all be automated using containers.
Overall, containers offer a uniform, scalable, and automated environment for software development and deployment, making them the perfect choice for CI/CD pipeline implementation. Microservices The microservices architecture approach to software development breaks applications down into smaller, independent services that can be created, deployed, and managed separately. Since containers offer a lightweight and portable environment for delivering and maintaining individual microservices, they are a great way to implement a microservices architecture. Using containers in a microservices architecture has various advantages: Independent Deployment: Thanks to containers, each microservice can be deployed independently of the others. This makes microservices simpler to manage and deploy, because changes to one microservice do not affect the others. Isolation: Containers isolate microservices from each other, preventing issues or failures in one microservice from affecting the others. Consistency: By offering a consistent environment for microservice deployment and management, containers make it possible to obtain the same results across different environments. Scalability: Containers can readily be scaled up or down to match the needs of specific microservices, making it simpler to manage variable workloads across different services. Legacy Application Modernization Modernizing legacy applications involves modifying or moving them to newer platforms or technologies to increase their functionality, performance, and scalability. Because containers offer a flexible and scalable environment for deploying and maintaining programs, they can be used to modernize legacy applications. Using containers for legacy application modernization has several advantages: Performance enhancement: Containers offer a portable and lightweight environment for deploying applications, which can enhance the performance of legacy applications. Increased agility: Containers make it simpler to manage and deploy legacy programs, making it easier to integrate updates and enhancements to the application. Cost-effectiveness: Because containers offer a flexible and scalable environment for delivering and maintaining programs, they can lower the cost of updating legacy systems. Containers, in general, are a great way to modernize legacy applications, since they increase the performance, agility, and scalability of legacy applications, making them simpler to manage and update over time. Components of Serverless Architecture An environment for designing, deploying, and managing serverless applications typically comprises several components that work together. The following are the main components of a serverless environment: Cloud Provider: The infrastructure and services required to operate serverless applications are provided by a cloud provider, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Functions as a Service: FaaS is the foundation of serverless architecture. FaaS enables programmers to create small, dedicated functions that run in response to events like API calls or data changes. Event Sources: Event sources produce the events that trigger serverless functions. Databases, message queuing systems, and HTTP requests are some examples of event sources. API Gateway: An API gateway is the entry point for all incoming requests to the serverless application. It receives client HTTP requests and routes them to the proper downstream services.
Databases: Serverless apps often manage and store data using NoSQL databases like DynamoDB. Monitoring and Logging: Monitoring and logging tools keep an eye on the performance and overall health of serverless applications, helping you spot difficulties early and resolve issues. Security: Serverless security entails safeguarding the application code, making sure that access controls are in place, and guarding against common security threats like SQL injection and cross-site scripting (XSS). Components of Container Architecture A container environment typically consists of several parts that work together to create a platform for developing, deploying, and managing containerized applications. A container environment's essential elements are as follows: Container Runtime: The container runtime is the software used to manage and run containers. It maintains the container lifecycle, offers an isolated environment for running applications in containers, and makes sure that containers have access to the resources they require. Container Images: A container image is a small, standalone package that includes the application code, dependencies, and configuration necessary to run a containerized application. Container images are typically kept in a container registry, like Docker Hub or Amazon Elastic Container Registry (ECR). Container Storage: Container storage lets containers store and access data. Local volumes and network-attached storage (NAS) are common components of container storage solutions. Container Monitoring: Container monitoring gives insight into the functionality and state of containerized applications. Container monitoring tools typically collect metrics like CPU and memory consumption, network traffic, and application logs. Container Security: Security is essential in every container environment. Container security entails protecting container runtime environments and container images, and maintaining the isolation of containers from one another. Access restrictions, vulnerability scanning, and encryption are common container security features. Container Orchestrator: A container orchestrator is an automated system for managing, scaling, and deploying containerized applications. Examples include Kubernetes, Docker Swarm, and Amazon EKS or ECS. When NOT To Use Serverless? Although serverless architecture has grown to be popular and quite useful, there are still some circumstances in which it may not be the ideal fit. Here are some cases where you might want to consider alternatives to serverless: Long-Running Functions Long-running functions are one situation where serverless might not be the best choice. Serverless functions are stateless and event-driven by design, so they are not appropriate for lengthy processes that need persistent state or continuous computation. If your application requires functions to run for a long period, you might need to adopt an option like containers, which offer more control over the environment and allow long-running processes. Serverless functions also have a maximum runtime limit that might not be enough for your requirements, and long-running processes on a serverless platform may cost more. Use an Unsupported Language Another reason not to use serverless is if you need to use a programming language that is not supported.
While most serverless platforms support many widely used programming languages, including Node.js, Python, and Java, some languages or frameworks might not be supported. This can prevent you from using the framework or language of your choice, pushing you either to use a supported one instead or to choose another cloud computing service with more freedom. Risk of Vendor Lock-in Serverless solutions depend on the infrastructure and services offered by cloud providers, making vendor lock-in a real risk. Switching to a different provider or platform can be challenging and time-consuming. As a result, you can find yourself dependent on one vendor and unable to transition to a different one, even if the latter would be more affordable or offer superior services. Hence, if avoiding vendor lock-in is a goal, you might want to consider alternatives that provide more flexibility and portability. In the end, your choice to adopt serverless should be based on the particular needs of your application. Despite its many advantages, it might not always be the best option. When NOT To Use Containers? Even though containers are an effective technology with numerous advantages, there are some situations in which they might not be the ideal choice. You might not want to choose containers in the following circumstances: Large Monolithic Applications The typical purpose of containers is to run a single process or application in an isolated environment. It might not be a good idea to containerize a huge, monolithic application with lots of components. Low-Resource Environments Containers may need substantial amounts of system resources like CPU, RAM, and storage to operate well. In low-resource environments, such as embedded systems or IoT devices, running containers may be overly resource-intensive and severely impact performance. Moreover, low-resource environments might not have the infrastructure required to support container orchestration systems, making it difficult to manage and scale containerized applications successfully. Desktop Applications In general, desktop apps shouldn't use containers. Containers are designed to execute isolated applications in a server environment, whereas desktop applications are normally installed and run directly on the user's computer. Desktop apps can be challenging to bundle and distribute using containers, and there may be issues depending on the user's hardware and operating system. Small and Simple Applications For small and basic applications, the overhead of containerization may outweigh the advantages. It might be easier and more effective to run the program directly on the host operating system. Ultimately, even though containers are a powerful technology, it's crucial to consider your specific use case and requirements before deciding whether to adopt them. When NOT To Use Microservices? Although microservices have many advantages, they might not always be the best option for every project. These are some scenarios in which using microservices may not be a good idea: Small and Simple Applications If your application is small and reasonably straightforward, a monolithic design can be more suitable than a microservices architecture. Using microservices for a tiny application may result in extra complexity and overhead.
Use an Unsupported Language

If you need to use a programming language that is not supported, that is another reason not to use serverless. While most serverless platforms support many widely used programming languages, including Node.js, Python, and Java, some languages or frameworks may not be supported. This can prevent you from using the framework or language of your choice, pushing you either to use a supported one instead or to choose another cloud computing service with more freedom.

Risk of Vendor Lock-in

Serverless solutions depend on the infrastructure and services offered by cloud providers, making vendor lock-in a real risk. Switching to a different provider or platform can be challenging and time-consuming. As a result, you may find yourself dependent on one vendor and unable to move to another, even if the latter would be more affordable or offer better services. Hence, if avoiding vendor lock-in is a goal, you might want to consider alternatives that provide more flexibility and portability. One common way to soften the risk is to hide provider services behind your own interfaces, as sketched below.
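Here is a minimal sketch of that idea in Python, assuming an application that stores binary objects. ObjectStore and S3ObjectStore are hypothetical names introduced for illustration; boto3 is AWS's Python SDK. Only the adapter knows about the provider, so moving to another vendor means writing one new subclass rather than rewriting business logic.

Python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Hypothetical provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3ObjectStore(ObjectStore):
    """Adapter for one provider; only this class depends on the AWS SDK."""

    def __init__(self, bucket: str):
        import boto3  # AWS SDK for Python
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()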
In the end, your choice to adopt serverless should be based on the particular needs of your application. Despite its many advantages, it may not always be the best option.

When NOT To Use Containers?

Even though containers are an effective technology with numerous advantages, there are situations in which they may not be the ideal option. You might not want to choose containers in the following circumstances:

Large Monolithic Applications

Containers are typically meant to run a single process or application in an isolated environment. Containerizing a huge, monolithic application with many components may not be a good idea.

Low-Resource Environments

Containers can require substantial system resources, such as CPU, RAM, and storage, to operate well. In low-resource environments, such as embedded systems or IoT devices, running containers may be too resource-intensive and can severely affect performance. Moreover, managing and scaling containerized applications successfully may be difficult in low-resource environments, because they may not have the infrastructure required to support container orchestration systems.

Desktop Applications

In general, desktop apps should not use containers. Containers are designed to run isolated applications in a server environment, whereas desktop applications are normally installed and run directly on the user's computer. Desktop apps can be challenging to bundle and distribute using containers, and there may be issues depending on the user's hardware and operating system.

Small and Simple Applications

For small and basic applications, the overhead of containerization may outweigh the advantages. It may be easier and more effective to run the program directly on the host operating system. Ultimately, even though containers are a powerful technology, it is crucial to consider your specific use case and requirements before deciding whether to adopt them.

When NOT To Use Microservices?

Although microservices have many advantages, they may not always be the best option for every project. These are some scenarios in which using microservices may not be a good idea:

Small and Simple Applications

If your application is small and reasonably straightforward, a monolithic design may be more suitable than a microservices architecture. Using microservices for a tiny application can introduce extra complexity and overhead.

Tight Budgets

If you are on a tight budget, microservices may not be the best choice, because building and deploying them can be more expensive than using a monolithic architecture.

Small and Inexperienced Development Teams

If your team is small and inexperienced with this architecture, it may be difficult to implement microservices properly, as developing and deploying them demands a high level of competence and coordination.

Low-Complexity Applications

If your application's complexity requirements are low, a monolithic design may be adequate. A microservices architecture is intended for complex applications, and applying it to simpler ones can add unnecessary complexity.

Legacy Applications

Incorporating a legacy system into a microservices architecture can be challenging and may cause compatibility problems and additional complexity. Therefore, before deciding whether to adopt microservices, carefully evaluate the requirements of your project and weigh the advantages and disadvantages.

Summary

Now, let's summarize the differences between serverless and containers in the following table, keeping in mind that each technology has its strengths and weaknesses, and that the decision of which one to use ultimately depends on the specific needs and requirements of the project.

Category | Serverless | Containers
Time to Market | Faster due to reduced infrastructure management | Slower due to more setup and management work
Ease of Use | Simplifies development and deployment | Portable but harder to manage; requires expertise
Scaling | Scales automatically based on usage | Can scale horizontally but requires manual effort
High Availability | Highly resilient; the platform handles failures | Resilient but requires manual failover mechanisms
Costs on the Cloud | More cost-effective due to the pay-as-you-go model | More expensive due to fixed infrastructure costs
Costs on Development | Lower due to reduced infrastructure management | Higher due to additional infrastructure management
Performance | Good for smaller apps, though there may be concerns such as cold starts | Great for larger and more complicated apps
Compatibility | Supports specific programming languages and platforms | Compatible with any language the host server supports
Vendor Lock-in | Higher risk due to reliance on the cloud provider | Lower risk due to flexibility in vendor selection
Security | More secure, as the cloud provider handles the infrastructure | Can be secure with proper maintenance and configuration
Logs | Provides centralized logging and monitoring | More challenging to track and analyze application logs

Conclusion

There is no one-size-fits-all approach when picking the ideal architecture for your application. Serverless, containers, and microservices are all powerful technologies, and each has specific benefits and drawbacks. Your choice between serverless and containers should be based on your project requirements, such as application complexity, budget, team skills, and integration with existing systems, and it must weigh the trade-offs between scalability, adaptability, and maintenance costs. Serverless may not be the ideal option if your application needs long-running functions or unsupported languages. Microservices may not be the most cost-effective option if you have a tiny or simple application, a limited budget, or a small and inexperienced development team. Containers may not be the best option if your application is a desktop application, a massive monolithic system, or runs with limited resources.

It is vital to keep in mind that choosing between serverless and containers for your application is a serious decision that should not be made lightly. Regardless of the architecture you select, at ClickIT we are available to help you with its implementation. Our team of professionals can guide you through the process and make sure that your application is developed and deployed successfully, freeing you up to concentrate on your main business goals.