A framework is a collection of code that supports the development process by providing ready-made components. Frameworks establish architectural patterns and structures that help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring Framework, Drupal, Angular, Eclipse, and more.
Exploring Apache Ignite With Spring Boot
Backend For Frontend (BFF) Pattern
Upgrading to Jakarta EE 9 or newer from an older version of Jakarta EE or Java EE can be a bit tricky because of the javax to jakarta prefix change. Some libraries may still be using the javax packages, which can cause conflicts when trying to run your applications on a Jakarta EE server like Eclipse GlassFish 7. You will likely encounter the same issues when upgrading to Spring Framework 6 or Spring Boot 3, Quarkus 3, and newer versions of many other frameworks, which now depend on Jakarta EE 9 APIs. But don't worry, I've got you covered! In this post, I'll explain everything you need to know to upgrade to Jakarta EE 9+ successfully and in almost no time. By Jakarta EE 9+, I mean either Jakarta EE 9 or 10, which is currently the latest version of Jakarta EE.

Existing Tools To Automate Upgrade Steps

Fortunately, many of the challenges can be automated using free and open-source tools like OpenRewrite, WindUp, and Eclipse Transformer. OpenRewrite is a powerful tool that can automatically make changes to your application's source code, such as updating all references to the old javax packages to the new jakarta prefix. Eclipse Transformer can do the same job, but it can also transform the final JAR, WAR, or EAR binary files in case your project is not ready for source code transformation yet. This is useful if you just want to try running your older application on a Jakarta EE 9+ runtime. All these tools can save you time and effort when upgrading to Jakarta EE 9+, allowing you to focus on other important aspects of your application's development. However, it's still important to review and test the changes made by these tools to ensure that they don't introduce any unintended consequences.

Transform Applications With Eclipse Transformer

I recommend starting every migration with Eclipse Transformer so that you can quickly verify whether your application can be migrated to a Jakarta EE 10 runtime without changes or identify potential problems for the migration. Using Eclipse Transformer might also be essential in the later stages of your migration, especially if your application relies on dependencies that are not yet compatible with Jakarta EE 9+ or if it's risky to upgrade them to a compatible version. In the end, transforming your application with Eclipse Transformer before you deploy it to a Jakarta EE 9+ runtime can work as a safety net, detecting and transforming any remaining classes and files that depend on older API packages. You can keep using Eclipse Transformer until you're absolutely sure everything in your application has been migrated.

To demonstrate the ease of using Eclipse Transformer, we at OmniFish have prepared a sample application project at javax-jakarta-transform-whole-war. This project showcases how to apply Eclipse Transformer to the build of a Maven project. The key ingredient to applying the transformation in a Maven project is the Maven plugin transformer-maven-plugin. This plugin can be installed as an extension, and then it can hook into the standard Maven build mechanism and modify the final build artifact. The Maven plugin provides two goals:

- jar – transforms application packages, like JAR, WAR, and EAR files. It can either modify them in place or create their transformed version in a separate file.
- transform – transforms directories (before they are packed into JAR, WAR, or other application packages). It can be used on exploded directories before they are deployed to an app server.

The example application uses both plugin goals.
The jar goal is used to transform the final artifact file, which is required to deploy the file later or deploy it to a Maven repository. The transform goal is used to transform the exploded directory, which is often used during development by various IDEs to deploy the application. Using the transform goal is optional, but it makes it easier to develop the application and deploy it from your IDE until the project is fully migrated to Jakarta EE 9 and Eclipse Transformer is no longer needed.

Besides using it as a Maven plugin, Eclipse Transformer can also be used on the command line and thus with any project or continuous integration system. To run Eclipse Transformer on the command line, first download the org.eclipse.transformer.cli distribution JAR file from Maven Central. Then, unpack this JAR, for example, into a folder named transformer. Then you can transform your application file, for example, jakarta-ee-8-app.war, into a new application file jakarta-ee-10-app-transformed.war with the following command:

Shell
java -jar transformer/org.eclipse.transformer.cli-0.5.0.jar jakarta-ee-8-app.war jakarta-ee-10-app-transformed.war

The file to transform doesn't have to be a WAR file. It can be a directory, WAR, JAR, and many other package types that Eclipse Transformer supports. You can find more information on how to use Eclipse Transformer on the command line on the GitHub project. However, in most cases, you won't need anything else; the tool does a very good job, even with the default options.

Transform Dependencies Incompatible With jakarta Prefix

Now, you'll need to upgrade or transform the individual libraries used by your application. This solves two problems. First, it improves the build time of your application during development by removing the step that transforms the final binary after each build. Second, it solves compilation problems you can face with some libraries after you adjust the source code of your application for Jakarta EE 9.

Libraries that have a version compatible with Jakarta EE 9 can simply be updated to that version. Most of the libraries widely used in enterprise projects already support Jakarta EE 9. However, some libraries maintain support for both Jakarta EE 9+ and older Jakarta EE and Java EE versions. Those libraries have two variants of the same library version, and you'll need to choose the variant that supports Jakarta EE 9+. If you use Maven, then, in most cases, you'll need to use the jakarta classifier like this:

XML
<dependency>
    <groupId>org.primefaces</groupId>
    <artifactId>primefaces</artifactId>
    <version>13.0.0</version>
    <classifier>jakarta</classifier>
</dependency>

But it's not the case for every library. Some libraries provide the Jakarta EE 9 compatible Maven artifact under completely different coordinates, and you'll need to do some research to find them.

When it's not possible to upgrade a library, you can transform individual libraries with Eclipse Transformer using a similar technique to transforming the whole application WAR, which we explained in a previous article. You can also use Eclipse Transformer on individual library JARs and then use the transformed JARs during the build. However, in modern Maven or Gradle-based projects, this isn't straightforward because of transitive dependencies. There's currently no tooling that would properly transform all the transitive dependencies automatically and install them correctly to a local repository.
But you can use a trick – you can merge all JARs that need to be transformed into a single JAR (an Uber JAR) with all the transitive dependencies, then transform it, and then install this single JAR into a Maven repository. Then, you'll only need to change the application POM file to depend on this single artifact instead of depending on all the individual artifacts that were transformed.

You'll need to create a new Maven project, e.g., transform-dependencies, next to our existing project. Then move all the dependencies you need to transform into this new project. Then remove all those dependencies from the original project and replace them with a single dependency on the new transform-dependencies project. In the final WAR file, instead of having each JAR file separately in the WAR, like this:

WEB-INF
    classes
    jasperreports.jar
    quartz.jar

we will end up with a single transformed JAR like this:

WEB-INF
    classes
    transform-dependencies.jar

This transform-dependencies.jar file will contain all the artifacts merged into it – it will contain all classes and files from all the artifacts. In order to achieve this, we can use the Maven Shade plugin, which merges multiple JAR files into a single artifact produced by the project:

XML
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.5.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <shadedClassifierName>jakarta</shadedClassifierName>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

This plugin takes all the dependencies defined in the project, merges them into a single Uber JAR, and attaches the JAR to the project as an artifact with the jakarta classifier. Now we add the Transformer plugin to transform the Uber JAR to make it compatible with Jakarta EE 9+. We need to configure the Transformer plugin to:

- execute the jar goal
- use the "jakartaDefaults" rule to apply transformations for Jakarta EE 9
- define the artifact with the classifier "jakarta" produced by the Maven Shade plugin; it will have the same groupId, artifactId, and version as the current project

You'll need to build the transform-dependencies project only once with mvn install to transform the dependencies into a single artifact in your local Maven repository. When you build the original application project, it will use the transformed Uber JAR and add it to the final WAR instead of all the individual (untransformed) JARs. Of course, this will likely introduce compilation failures because your source code still uses the javax prefix. There's an easy solution to fix that, which I'll describe in the next section.

Transform Application Source Code

Yes, you could fix the compilation errors in your source code manually or with some simple find-and-replace mechanism. But I don't recommend that. Not all packages with the "javax." prefix should be transformed; resource files like XML descriptors also need to be transformed, etc. It's much easier and more bulletproof to use automation tools designed to transform code for Jakarta EE 9.
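To see why a plain find-and-replace across the source tree is risky, here is a small illustrative sketch (the class and package choices below are only examples, not taken from the sample project). The migration tools rename imports of Jakarta EE APIs but leave javax packages that belong to the JDK untouched:

Java
// Before the transformation (Java EE 8 / Jakarta EE 8), the class imported:
//   import javax.persistence.Entity;  // Jakarta EE API - gets renamed
//   import javax.persistence.Id;      // Jakarta EE API - gets renamed
//   import javax.sql.DataSource;      // JDK API - must NOT be renamed

// After the transformation (Jakarta EE 9+):
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import javax.sql.DataSource; // unchanged; javax.sql is part of the JDK, not Jakarta EE

@Entity
public class Customer {

    @Id
    private Long id;

    // A field of a JDK javax type stays exactly as it was before the migration.
    private transient DataSource dataSource;
}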
There are two free tools that can help us here:

- Eclipse Transformer – we used it earlier to transform the final application, but it's also capable of transforming Java source files and resources.
- OpenRewrite – a tool that can automatically make changes to your application's source code based on specified rules. It contains rules for Jakarta EE 9.

I recommend using Eclipse Transformer for this step as well because it has a more complete set of transformation rules for Jakarta EE 9 than OpenRewrite and can transform some resource files that OpenRewrite ignores. However, the end result is very similar even with OpenRewrite; you would just need to make a few additional manual changes to the source code if you use it instead. I'll describe how to use both of these tools so that you can choose which of them you'd like to use.

For a single application, you'll need to transform the source code only once. It's not worth changing the project configuration just for this single step. Therefore, we'll describe how to use Eclipse Transformer on the command line to do this without adding any Maven plugin or configuration to your project. This also means that you can apply this procedure if you use a build tool other than Maven, e.g., Gradle.

Follow these steps to transform the source code of your application:

1. Download the Eclipse Transformer CLI distribution artifact from Maven Central. Download the distribution JAR file of the latest version, not the plain JAR file. For version 0.5.0, here's the direct download link: org.eclipse.transformer.cli-0.5.0-distribution.jar
2. Unpack the JAR file into some directory, e.g., /path/to/transformer.
3. Go to your application's project directory.
4. Run the following to transform the src directory (with source code, resources, test code and resources, etc.) into the output_src directory:

Shell
java -jar /path/to/transformer/org.eclipse.transformer.cli-0.5.0.jar src

5. Move the contents of the output_src directory into src (overwrite files). On Linux and macOS, you can use the rsync command-line tool:

Shell
rsync -a output_src/* src

6. Edit pom.xml manually because Eclipse Transformer doesn't do this: increase the Jakarta EE version to 9.0.0 or 10.0.0, so that your project depends on the following dependency (or an equivalent Web Profile or Core Profile dependency):

XML
<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>10.0.0</version>
    <scope>provided</scope>
</dependency>

As an alternative to Eclipse Transformer, you can use OpenRewrite to transform your application source code to be compatible with Jakarta EE 9. Applying OpenRewrite is very easy. If you already have Maven installed, you can just use it as a Maven plugin configured on the command line (note that your project doesn't have to be Maven-based; Maven is only required to run the transformation):

1. Go to the application's project directory.
2. Execute the following command:

Shell
mvn -U org.openrewrite.maven:rewrite-maven-plugin:run \
  -Drewrite.recipeArtifactCoordinates=org.openrewrite.recipe:rewrite-migrate-java:LATEST \
  -Drewrite.activeRecipes=org.openrewrite.java.migrate.jakarta.JavaxMigrationToJakarta

This will modify your application in place. Theoretically, it should be possible to build it now, and all should work as expected. But, based on my experience, I had to apply a few fixes in a Maven-based WAR project in pom.xml, web.xml, and Java annotations, which I didn't need to do after using the Eclipse Transformer.
Conclusion

After performing these steps, your application should be completely compatible with Jakarta EE 9. You can remove the transformation of your final application to save build time if all the non-transformed dependencies are compatible with Jakarta EE 9. In summary, we needed to perform three steps, plus two optional ones:

1. (optional) Transform the final application using Eclipse Transformer and test whether it runs on a Jakarta EE 9+ server or framework.
2. Upgrade dependencies to versions compatible with Jakarta EE 9.
3. Transform dependencies using Eclipse Transformer if they can't be upgraded to Jakarta EE 9.
4. Transform the application source code using Eclipse Transformer or OpenRewrite.
5. (optional) Remove the transformation of the final application.

After all these steps are done, you should be able to build your application. It should compile successfully and work well if you deploy it on a Jakarta EE 9+ server or run it with your framework of choice. Your project is fully transformed to Jakarta EE 9+, and you can continue working on it as before, as if it had been designed for Jakarta EE 9+ from the beginning.
Did you know you can containerize your Spring Boot applications to start up in milliseconds, without compromising on throughput, memory, development-production parity, or Java language features? And with little or no refactoring of the application code? Here's how with Open Liberty 23.0.0.10-beta.

Liberty InstantOn

The InstantOn capability in the Open Liberty runtime uses the IBM Semeru JDK and a Linux technology called Checkpoint/Restore in Userspace (CRIU) to take a checkpoint, or a point-in-time snapshot, of the application process. This checkpoint can then be restored very quickly to bring the application process back into the state it was in when the checkpoint was taken. The application can be restored multiple times because Open Liberty and the Semeru JDK preserve the uniqueness of each restored process in containers. Each restored application process runs without first having to go through the whole startup sequence, saving up to 90% of startup time (depending on your application). InstantOn requires very little modification of your Java application to make this improvement happen. For more information about Liberty InstantOn, see the How to package your cloud-native Java application for rapid startup blog post and Faster startup for containerized applications with Open Liberty InstantOn in the Open Liberty docs.

Spring Boot Support for Checkpoint/Restore

The Spring Framework 6.1 release will integrate with JVM checkpoint/restore by using the org.crac project to allow capable systems to reduce the startup times of Spring-based Java applications. With Liberty InstantOn 23.0.0.10-beta, you can configure a new crac-1.3 feature to provide an implementation of the org.crac API that integrates with Liberty InstantOn. This allows Spring-based applications, including Spring Boot applications, to be deployed with Liberty InstantOn to achieve rapid startup times.

Production-Ready Liberty Container Images

New Universal Base Image (UBI) container images are uploaded to the IBM Container Registry for each new release of Liberty. Starting with the 23.0.0.6 Liberty release, the Liberty UBI container images include the necessary prerequisites to checkpoint your applications with Liberty InstantOn. And now, starting with the 23.0.0.10-beta release, the UBI beta container image also includes the prerequisites to checkpoint your Spring Boot 3.2-based applications. This beta release includes an implementation of the org.crac APIs with the Liberty beta feature crac-1.3. The crac-1.3 feature, along with the Spring Framework 6.1 support for org.crac, allows you to checkpoint your Spring-based applications with Liberty InstantOn to achieve rapid startup times.

The Liberty container images make it easy to develop InstantOn applications that are ready to deploy into production. An important benefit of using Liberty InstantOn is the ability to take a checkpoint of the application process inside the container without requiring the root user to run the application process in the container. It is important, from a security perspective, to avoid running the application process in the container as the root user. This allows you to deploy your InstantOn container images to existing Kubernetes services like AWS EKS and Azure AKS.

Spring Boot 3.2.0 Example Using Liberty InstantOn

This example uses the Containerizing, packaging, and running a Spring Boot application Open Liberty guide to demonstrate using Liberty InstantOn with a Spring Boot 3.2.0-based application.
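Before getting into the container example, it may help to see what the org.crac API mentioned above looks like from application code. The following is only a rough sketch, not part of the Open Liberty guide, and with Spring Framework 6.1 you typically do not write it yourself because Spring's lifecycle beans handle checkpoint and restore for you: a component registers an org.crac Resource and receives callbacks around the checkpoint.

Java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Hypothetical example of a component that releases and re-acquires resources
// around a checkpoint/restore cycle.
public class ConnectionPoolResource implements Resource {

    public ConnectionPoolResource() {
        // Register with the global org.crac context so the runtime calls us back.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Close sockets, file handles, and connection pools that must not be
        // captured in the point-in-time snapshot.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Re-establish connections once the process has been restored.
    }
}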
The fastest way to get up and running is to clone the cracSpringBoot branch from the guide's GitHub repository:

Plain Text
git clone --branch cracSpringBoot https://github.com/openliberty/guide-spring-boot.git
cd guide-spring-boot/finish

The cracSpringBoot branch updates the guide to use the Open Liberty beta and Spring Boot version 3.2.0-M1, which contains the initial support of org.crac for Spring Boot applications. To build and run the example Spring Boot application, you must be using Java 17 or higher. After cloning the cracSpringBoot branch of the guide's repository, the first step is to build the Spring Boot application. Build the application by running the Maven wrapper command provided in the project's finish/ folder:

Plain Text
./mvnw package

After building the Spring Boot application, the next step is to containerize it. We focus here on the changes needed to containerize the application with Liberty InstantOn. If you want more information about how to best containerize Spring Boot applications with Open Liberty, read through the Containerizing, packaging, and running a Spring Boot application guide. To build the application container image with InstantOn, you must be able to either run a privileged container or grant the container image build engine the necessary Linux capabilities to do the checkpoint.

Enabling the crac-1.3 Liberty Feature

Liberty is composed of features that you enable according to the requirements of your application. To use Liberty's implementation of org.crac, you must enable the crac-1.3 feature in the Liberty configuration. For this example, we can do that by copying the src/main/liberty/config/crac.xml file into the container image with the following Dockerfile command:

Dockerfile
COPY src/main/liberty/config/crac.xml /config/configDropins/defaults

The crac.xml Liberty configuration file enables the crac-1.3 feature with the following content:

XML
<?xml version="1.0" encoding="UTF-8"?>
<server description="Enable the org.crac API">
    <featureManager>
        <feature>crac-1.3</feature>
    </featureManager>
</server>

Building With Podman

With Podman, the container image build engine can be granted the necessary Linux capabilities to checkpoint the application during the container image build. This allows you to run the checkpoint.sh script as a RUN instruction, as specified in the Dockerfile.podman file. This is the last instruction of the Dockerfile.podman file, as shown in the following example:

Dockerfile
...
RUN configure.sh
RUN checkpoint.sh afterAppStart

To grant the necessary capabilities to the Podman container image build engine, run the following command as root or with sudo:

Shell
sudo podman build \
  -f Dockerfile.podman \
  -t springboot \
  --cap-add=CHECKPOINT_RESTORE \
  --cap-add=SYS_PTRACE \
  --cap-add=SETPCAP \
  --security-opt seccomp=unconfined .

You can run the previous command or run the provided scripts/build-instanton-podman.sh script to build the application container image. During the build, the last thing done is to run the checkpoint.sh script by using the afterAppStart option. This causes the checkpoint to happen after the application is started. You see the following output when the application has started:

Plain Text
[AUDIT ] CWWKZ0001I: Application thin-guide-spring-boot-0.1.0 started in 3.880 seconds.
[AUDIT ] CWWKC0451I: A server checkpoint "afterAppStart" was requested. When the checkpoint completes, the server stops.
2023-09-06T21:06:18.763Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Stopping Spring-managed lifecycle beans before JVM checkpoint
2023-09-06T21:06:18.767Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2023-09-06T21:06:18.768Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Bean 'applicationTaskExecutor' completed its stop procedure
2023-09-06T21:06:18.769Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147482623
2023-09-06T21:06:18.771Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Bean 'webServerGracefulShutdown' completed its stop procedure
2023-09-06T21:06:18.771Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147481599
2023-09-06T21:06:18.796Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Bean 'webServerStartStop' completed its stop procedure
2023-09-06T21:06:18.796Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase -2147483647
2023-09-06T21:06:18.797Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Bean 'springBootLoggingLifecycle' completed its stop procedure
[2/2] COMMIT springboot

The debug output from the Spring Framework shows the Lifecycle beans in the application were stopped to prepare for the checkpoint. At this point, you have an application container image called springboot that can be run to restore the application process.

Building With Docker

At this time, Docker does not allow you to grant the container image build engine the Linux capabilities necessary to perform an application checkpoint. This prevents you from running the checkpoint.sh script during the docker build command. Instead, you need to use a three-step approach:

1. Build the application container image without the InstantOn layer.
2. Run the application container to perform a checkpoint of the application.
3. Commit the stopped container with the checkpoint process data into an InstantOn application container image.

Complete these three build steps by running the scripts/build-instanton-docker.sh script. The resulting output is similar to the checkpoint during the Podman build. You will notice some debug output from the Spring Framework for the lifecycle beans. At this point, you have an application container image called springboot that can be run to restore the application process.

Run the InstantOn Spring Boot Application

Both Podman and Docker can use the same options to run the Spring Boot InstantOn application:

Plain Text
[sudo podman or docker] run \
  --rm \
  -p 9080:9080 \
  --cap-add=CHECKPOINT_RESTORE \
  --cap-add=SETPCAP \
  --security-opt seccomp=unconfined \
  springboot

You can run the previous command or run the provided scripts/run-instanton-podman.sh or scripts/run-instanton-docker.sh script to run the application container image.
You see the following output when the application process is restored:

Plain Text
[AUDIT ] Launching defaultServer (Open Liberty 23.0.0.10-beta/wlp-1.0.81.cl230920230904-1158) on Eclipse OpenJ9 VM, version 17.0.7+7 (en_US)
2023-09-07T15:22:52.683Z INFO 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Restarting Spring-managed lifecycle beans after JVM restore
2023-09-07T15:22:52.684Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase -2147483647
2023-09-07T15:22:52.684Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'springBootLoggingLifecycle'
2023-09-07T15:22:52.685Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147481599
[AUDIT ] CWWKT0016I: Web application available (default_host): http://e93ebe585ce3:9080/
2023-09-07T15:22:52.759Z INFO 118 --- [ecutor-thread-1] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 106109 ms
2023-09-07T15:22:52.762Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'webServerStartStop'
2023-09-07T15:22:52.763Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147482623
2023-09-07T15:22:52.763Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'webServerGracefulShutdown'
2023-09-07T15:22:52.763Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2023-09-07T15:22:52.763Z DEBUG 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'applicationTaskExecutor'
2023-09-07T15:22:52.764Z INFO 118 --- [ecutor-thread-1] o.s.c.support.DefaultLifecycleProcessor : Spring-managed lifecycle restart completed in 80 ms
[AUDIT ] CWWKC0452I: The Liberty server process resumed operation from a checkpoint in 0.263 seconds.
[AUDIT ] CWWKZ0001I: Application thin-guide-spring-boot-0.1.0 started in 0.265 seconds.
[AUDIT ] CWWKF0012I: The server installed the following features: [crac-1.3, expressionLanguage-5.0, pages-3.1, servlet-6.0, springBoot-3.0, ssl-1.0, transportSecurity-1.0, websocket-2.1].
[AUDIT ] CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 0.277 seconds.

Notice the last message ... server started in 0.277 seconds. The 0.277-second startup time includes the time it took for CRIU to restore the Java process as well as for the Liberty runtime to restore the runtime state such that it can safely run the application once restored. Additional debug messages are enabled for the Spring Framework to show the default Spring lifecycle processor restoring the lifecycle beans in the application. This is a greater than 10x improvement in startup time when compared to the original startup time of 5.5+ seconds when not using InstantOn.

Summary

Liberty InstantOn can be applied to applications using open standards, such as Jakarta EE and MicroProfile, as well as Spring-based applications using the latest versions of Spring Boot and Spring Framework that have support for org.crac. InstantOn application images will be ready to deploy into existing public clouds, such as AWS EKS and Azure AKS platforms.
I remember the first time I saw a demonstration of Ruby on Rails. With very little effort, demonstrators created a full-stack web application that could be used for real business purposes. I was impressed – especially when I thought about how much time it took me to deliver similar solutions using the Seam and Struts frameworks. Ruby was created in 1993 to be an easy-to-use scripting language that also included object-oriented features. Ruby on Rails took things to the next level in the mid 2000s – arriving at the right time to become the tech-of-choice for the initial startup efforts of Twitter, Shopify, GitHub, and Airbnb. I began to ask the question, “Is it possible to have a product, like Ruby on Rails, without needing to worry about the infrastructure or underlying data tier?” That’s when I discovered the Zipper platform. About Zipper Zipper is a platform for building web services using simple TypeScript functions. You use Zipper to create applets (not related to Java, though they share the same name), which are then built and deployed on Zipper’s platform. The coolest thing about Zipper is that it lets you focus on coding your solution using TypeScript, and you don’t need to worry about anything else. Zipper takes care of: User interface Infrastructure to host your solution Persistence layer APIs to interact with your applet Authentication Although the platform is currently in beta, it’s open for consumers to use. At the time I wrote this article, there were four templates in place to help new adopters get started: Hello World – a basic applet to get you started CRUD Template – offers a ToDo list where items can be created, viewed, updated, and deleted Slack App Template – provides an example on how to interact with the Slack service AI-Generated Code – expresses your solution in human language and lets AI create an applet for you There is also a gallery on the Zipper platform that provides applets that can be forked in the same manner as Git-based repositories. I thought I would put the Zipper platform to the test and create a ballot applet. HOA Ballot Use Case The homeowner’s association (HOA) concept started to gain momentum in the United States back in the 20th century. Subdivisions formed HOAs to handle things like the care of common areas and for establishing rules and guidelines for residents. Their goal is to maintain the subdivision’s quality of living as a whole, long after the home builder has finished development. HOAs often hold elections to allow homeowners to vote on the candidate they feel best matches their views and perspectives. In fact, last year I published an article on how an HOA ballot could be created using Web3 technologies. For this article, I wanted to take the same approach using Zipper. Ballot Requirements The requirements for the ballot applet are: As a ballot owner, I need the ability to create a list of candidates for the ballot. As a ballot owner, I need the ability to create a list of registered voters. As a voter, I need the ability to view the list of candidates. As a voter, I need the ability to cast one vote for a single candidate. As a voter, I need the ability to see a current tally of votes that have been cast for each candidate. Additionally, I thought some stretch goals would be nice too: As a ballot owner, I need the ability to clear all candidates. As a ballot owner, I need the ability to clear all voters. As a ballot owner, I need the ability to set a title for the ballot. 
As a ballot owner, I need the ability to set a subtitle for the ballot. Designing the Ballot Applet To start working on the Zipper platform, I navigated to Zipper's website and clicked the Sign In button. Next, I selected an authentication source: Once logged in, I used the Create Applet button from the dashboard to create a new applet: A unique name is generated, but that can be changed to better identify your use case. For now, I left all the defaults the same and pushed the Next button – which allowed me to select from four different templates for applet creation. I started with the CRUD template because it provides a solid example of how the common create, view, update, and delete flows work on the Zipper platform. Once the code was created, the screen appears as shown below: With a fully functional applet in place, we can now update the code to meet the HOA ballot use requirements. Establish Core Elements For the ballot applet, the first thing I wanted to do was update the types.ts file as shown below: TypeScript export type Candidate = { id: string; name: string; votes: number; }; export type Voter = { email: string; name: string; voted: boolean; }; I wanted to establish constant values for the ballot title and subtitle within a new file called constants.ts: TypeScript export class Constants { static readonly BALLOT_TITLE = "Sample Ballot"; static readonly BALLOT_SUBTITLE = "Sample Ballot Subtitle"; }; To allow only the ballot owner to make changes to the ballot, I used the Secrets tab for the applet to create an owner secret with the value of my email address. Then I introduced a common.ts file which contained the validateRequest() function: TypeScript export function validateRequest(context: Zipper.HandlerContext) { if (context.userInfo?.email !== Deno.env.get('owner')) { return ( <> <Markdown> {`### Error: You are not authorized to perform this action`} </Markdown> </> ); } }; This way I could pass in the context to this function to make sure only the value in the owner secret would be allowed to make changes to the ballot and voters. Establishing Candidates After understanding how the ToDo item was created in the original CRUD applet, I was able to introduce the create-candidate.ts file as shown below: TypeScript import { Candidate } from "./types.ts"; import { validateRequest } from "./common.ts"; type Input = { name: string; }; export async function handler({ name }: Input, context: Zipper.HandlerContext) { validateRequest(context); const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || []; const newCandidate: Candidate = { id: crypto.randomUUID(), name: name, votes: 0, }; candidates.push(newCandidate); await Zipper.storage.set("candidates", candidates); return newCandidate; } For this use case, we just need to provide a candidate name, but the Candidate object contains a unique ID and the number of votes received. While here, I went ahead and wrote the delete-all-candidates.ts file, which removes all candidates from the key/value data store: TypeScript import { validateRequest } from "./common.ts"; type Input = { force: boolean; }; export async function handler( { force }: Input, context: Zipper.HandlerContext ) { validateRequest(context); if (force) { await Zipper.storage.set("candidates", []); } } At this point, I used the Preview functionality to create Candidate A, Candidate B, and Candidate C: Registering Voters With the ballot ready, I needed the ability to register voters for the ballot. 
So I added a create-voter.ts file with the following content: TypeScript import { Voter } from "./types.ts"; import { validateRequest } from "./common.ts"; type Input = { email: string; name: string; }; export async function handler( { email, name }: Input, context: Zipper.HandlerContext ) { validateRequest(context); const voters = (await Zipper.storage.get<Voter[]>("voters")) || []; const newVoter: Voter = { email: email, name: name, voted: false, }; voters.push(newVoter); await Zipper.storage.set("voters", voters); return newVoter; } To register a voter, I decided to provide inputs for email address and name. There is also a boolean property called voted which will be used to enforce the vote-only-once rule. Like before, I went ahead and created the delete-all-voters.ts file: TypeScript import { validateRequest } from "./common.ts"; type Input = { force: boolean; }; export async function handler( { force }: Input, context: Zipper.HandlerContext ) { validateRequest(context); if (force) { await Zipper.storage.set("voters", []); } } Now that we were ready to register some voters, I registered myself as a voter for the ballot: Creating the Ballot The last thing I needed to do was establish the ballot. This involved updating the main.ts as shown below: TypeScript import { Constants } from "./constants.ts"; import { Candidate, Voter } from "./types.ts"; type Input = { email: string; }; export async function handler({ email }: Input) { const voters = (await Zipper.storage.get<Voter[]>("voters")) || []; const voter = voters.find((v) => v.email == email); const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || []; if (email && voter && candidates.length > 0) { return { candidates: candidates.map((candidate) => { return { Candidate: candidate.name, Votes: candidate.votes, actions: [ Zipper.Action.create({ actionType: "button", showAs: "refresh", path: "vote", text: `Vote for ${candidate.name}`, isDisabled: voter.voted, inputs: { candidateId: candidate.id, voterId: voter.email, }, }), ], }; }), }; } else if (!email) { <> <h4>Error:</h4> <p> You must provide a valid email address in order to vote for this ballot. </p> </>; } else if (!voter) { return ( <> <h4>Invalid Email Address:</h4> <p> The email address provided ({email}) is not authorized to vote for this ballot. </p> </> ); } else { return ( <> <h4>Ballot Not Ready:</h4> <p>No candidates have been configured for this ballot.</p> <p>Please try again later.</p> </> ); } } export const config: Zipper.HandlerConfig = { description: { title: Constants.BALLOT_TITLE, subtitle: Constants.BALLOT_SUBTITLE, }, }; I added the following validations as part of the processing logic: The email property must be included or else a “You must provide a valid email address in order to vote for this ballot” message will be displayed. The email value provided must match a registered voter or else a “The email address provided is not authorized to vote for this ballot” message will be displayed. There must be at least one candidate to vote on or else a “No candidates have been configured for this ballot” message will be displayed. If the registered voter has already voted, the voting buttons will be disabled for all candidates on the ballot. 
The main.ts file contains a button for each candidate, all of which call the vote.ts file, displayed below: TypeScript import { Candidate, Voter } from "./types.ts"; type Input = { candidateId: string; voterId: string; }; export async function handler({ candidateId, voterId }: Input) { const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || []; const candidate = candidates.find((c) => c.id == candidateId); const candidateIndex = candidates.findIndex(c => c.id == candidateId); const voters = (await Zipper.storage.get<Voter[]>("voters")) || []; const voter = voters.find((v) => v.email == voterId); const voterIndex = voters.findIndex(v => v.email == voterId); if (candidate && voter) { candidate.votes++; candidates[candidateIndex] = candidate; voter.voted = true; voters[voterIndex] = voter; await Zipper.storage.set("candidates", candidates); await Zipper.storage.set("voters", voters); return `${voter.name} successfully voted for ${candidate.name}`; } return `Could not vote. candidate=${ candidate }, voter=${ voter }`; } At this point, the ballot applet was ready for use. HOA Ballot In Action For each registered voter, I would send them an email with a link similar to what is listed below: https://squeeking-echoing-cricket.zipper.run/run/main.ts?email=some.email@example.com The link would be customized to provide the appropriate email address for the email query parameter. Clicking the link runs the main.ts file and passes in the email parameter, avoiding the need for the registered voter to have to type in their email address. The ballot appears as shown below: I decided to cast my vote for Candidate B. Once I pushed the button, the ballot was updated as shown: The number of votes for Candidate B increased by one, and all of the voting buttons were disabled. Success! Conclusion Looking back on the requirements for the ballot applet, I realized I was able to meet all of the criteria, including the stretch goals in about two hours—and this included having a UI, infrastructure, and deployment. The best part of this experience was that 100% of my time was focused on building my solution, and I didn’t need to spend any time dealing with infrastructure or even the persistence store. My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional: “Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.” - J. Vester The Zipper platform adheres to my personal mission statement 100%. In fact, they have been able to take things a step further than Ruby on Rails did, because I don’t have to worry about where my service will run or what data store I will need to configure. Using the applet approach, my ballot is already deployed and ready for use. If you are interested in giving applets a try, simply login to zipper.dev and start building. Currently, using the Zipper platform is free. Give the AI-Generated Code template a try, as it is really cool to provide a paragraph of what you want to build and see how closely the resulting applet matches what you have in mind. If you want to give my ballot applet a try, it is also available to fork in the Zipper gallery. Have a really great day!
If you use Spring WebFlux, you probably want your requests to be more resilient. In this case, we can just use the retries that come packaged with the WebFlux library. There are various cases that we can take into account: Too many requests to the server An internal server error Unexpected format Server timeout We would make a test case for those using MockWebServer. We will add the WebFlux and the MockWebServer to a project: XML <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> <version>2.7.15</version> </dependency> <dependency> <groupId>com.squareup.okhttp3</groupId> <artifactId>mockwebserver</artifactId> <version>4.11.0</version> <scope>test</scope> </dependency> <dependency> <groupId>io.projectreactor</groupId> <artifactId>reactor-test</artifactId> <scope>test</scope> <version>3.5.9</version> </dependency> Let’s check the scenario of too many requests on the server. In this scenario, our request fails because the server will not fulfill it. The server is still functional however and on another request, chances are we shall receive a proper response. Java import okhttp3.mockwebserver.MockResponse; import okhttp3.mockwebserver.MockWebServer; import okhttp3.mockwebserver.SocketPolicy; import org.junit.jupiter.api.Test; import org.springframework.web.reactive.function.client.WebClient; import reactor.core.publisher.Mono; import reactor.test.StepVerifier; import java.io.IOException; import java.time.Duration; import java.util.concurrent.TimeUnit; class WebFluxRetry { @Test void testTooManyRequests() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Too Many Requests") .setResponseCode(429); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } } We used the mock server in order to enqueue requests. Essentially the requests we placed on the mock server will be enqueued and consumed every time we do a request. The first two responses would be failed 429 responses from the server. Let’s check the case of 5xx responses. A 5xx can be caused by various reasons. Usually, if we face a 5xx, there is probably a problem in the server codebase. However, in some cases, 5xx might come as a result of an unstable service that regularly restarts. Also, a server might be deployed in an availability zone that faces network issues; it can even be a failed rollout that is not fully in effect. In this case, a retry makes sense. By retrying, the request will be routed to the next server behind the load balancer. 
We will try a request that has a bad status: Java @Test void test5xxResponse() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Server Error") .setResponseCode(500); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } Also, a response with the wrong format is possible to happen if an application goes haywire: Java @Data @AllArgsConstructor @NoArgsConstructor private static class UsernameResponse { private String username; } @Test void badFormat() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Plain text"); MockResponse successfulRequests = new MockResponse() .setBody("{\"username\":\"test\"}") .setHeader("Content-Type","application/json"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<UsernameResponse> result = webClient.get() .retrieve() .bodyToMono(UsernameResponse.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.getUsername().equals("test")) .verifyComplete(); server.shutdown(); } If we break it down, we created two responses in plain text format. Those responses would be rejected since they cannot be mapped to the UsernameResponse object. Thanks to the retries we managed to get a successful response. Our last request would tackle the case of a timeout: Java @Test void badTimeout() throws IOException { MockWebServer server = new MockWebServer(); MockResponse dealayedResponse= new MockResponse() .setBody("Plain text") .setSocketPolicy(SocketPolicy.DISCONNECT_DURING_RESPONSE_BODY) .setBodyDelay(10000, TimeUnit.MILLISECONDS); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(dealayedResponse); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .timeout(Duration.ofMillis(5_000)) .retry(1); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } That’s it. Thanks to retries, our codebase was able to recover from failures and become more resilient. Also, we used MockWebServer, which can be very handy for simulating these scenarios.
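As a closing note, the tests above use the plain retry(n) operator, which retries immediately and on any error. A possible refinement is Reactor's retryWhen with a backoff specification, which spaces out attempts and lets you restrict retries to errors that are actually worth retrying. The following is only a sketch; the base URL and the choice of exception types are illustrative and not part of the tests above.

Java
import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;
import org.springframework.web.reactive.function.client.WebClientResponseException;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public class RetryWithBackoff {

    public static void main(String[] args) {
        WebClient webClient = WebClient.builder()
                .baseUrl("http://localhost:8080") // illustrative base URL
                .build();

        Mono<String> result = webClient.get()
                .retrieve()
                .bodyToMono(String.class)
                // Retry up to 2 times with an exponential delay starting at 200 ms
                // (plus jitter), but only for errors where another attempt may help.
                .retryWhen(Retry.backoff(2, Duration.ofMillis(200))
                        .filter(ex -> ex instanceof WebClientResponseException.TooManyRequests
                                || ex instanceof WebClientResponseException.InternalServerError));

        System.out.println(result.block());
    }
}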
If you’re anything like me, you’ve noticed the massive boom in AI technology. It promises to disrupt not just software engineering but every industry. THEY’RE COMING FOR US!!! Just kidding ;P I’ve been bettering my understanding of what these tools are and how they work, and decided to create a tutorial series for web developers to learn how to incorporate AI technology into web apps. In this series, we’ll learn how to integrate OpenAI‘s AI services into an application built with Qwik, a JavaScript framework focused on the concept of resumability (this will be relevant to understand later). Here’s what the series outline looks like: Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying We’ll get into the specifics of OpenAI and Qwik where it makes sense, but I will mostly focus on general-purpose knowledge, tooling, and implementations that should apply to whatever framework or toolchain you are using. We’ll be working as closely to fundamentals as we can, and I’ll point out which parts are unique to this app. Here’s a little sneak preview. I thought it would be cool to build an app that takes two opponents and uses AI to determine who would win in a hypothetical fight. It provides some explanation and the option to create an AI-generated image. Sometimes the results come out a little wonky, but that’s what makes it fun. I hope you’re excited to get started because in this first post, we are mostly going to work on… Boilerplate :/ Prerequisites Before we start building anything, we have to cover a couple of prerequisites. Qwik is a JavaScript framework, so we will have to have Node.js (and NPM) installed. You can download the most recent version, but anything above version v16.8 should work. I’ll be using version 20. Next, we’ll also need an OpenAI account to have access to their API. At the end of the series, we will deploy our applications to a VPS (Virtual Private Server). The steps we follow should be the same regardless of what provider you choose. I’ll be using Akamai’s cloud computing services (formerly Linode). Setting Up the Qwik App Assuming we have the prerequisites out of the way, we can open a command line terminal and run the command: npm create qwik@latest. This will run the Qwik CLI that will help us bootstrap our application. It will ask you a series of configuration questions, and then generate the project for you. Here’s what my answers looked like: If everything works, open up the project and start exploring. Inside the project folder, you’ll notice some important files and folders: /src: Contains all application business logic /src/components: Contains reusable components to build our app with /src/routes: Responsible for Qwik’s file-based routing; Each folder represents a route (can be a page or API endpoint). To make a page, drop a index.{jsx|tsx} file in the route’s folder. /src/root.tsx: This file exports the root component responsible for generating the HTML document root. Start Development Qwik uses Vite as a bundler, which is convenient because Vite has a built-in development server. It supports running our application locally, and updating the browser when files change. To start the development server, we can open our project in a terminal and execute the command npm run dev. With the dev server running, you can open the browser and head to http://localhost:5173 and you should see a very basic app. 
Any time we make changes to our app, we should see those changes reflected almost immediately in the browser. Add Styling This project won’t focus too much on styling, so this section is totally optional if you want to do your own thing. To keep things simple, I’ll use Tailwind. The Qwik CLI makes it easy to add the necessary changes, by executing the terminal command, npm run qwik add. This will prompt you with several available Qwik plugins to choose from. You can use your arrow keys to move down to the Tailwind plugin and press Enter. Then it will show you the changes it will make to your codebase and ask for confirmation. As long as it looks good, you can hit Enter, once again. For my projects, I also like to have a consistent theme, so I keep a file in my GitHub to copy and paste styles from. Obviously, if you want your own theme, you can ignore this step, but if you want your project to look as amazing as mine, copy the styles from this file on GitHub into the /src/global.css file. You can replace the old styles, but leave the Tailwind directives in place. Prepare Homepage The last thing we’ll do today to get the project to a good starting point is make some changes to the homepage. This means making changes to /src/routes/index.tsx. By default, this file starts out with some very basic text and an example for modifying the HTML <head> by exporting a head variable. The changes I want to make include: Removing the head export Removing all text except the <h1>; Feel free to add your own page title text. Adding some Tailwind classes to center the content and make the <h1> larger Wrapping the content with a <main> tag to make it more semantic Adding Tailwind classes to the <main> tag to add some padding and center the contents These are all minor changes that aren’t strictly necessary, but I think they will provide a nice starting point for building out our app in the next post. Here’s what the file looks like after my changes. import { component$ } from "@builder.io/qwik"; export default component$(() => { return ( <main class="max-w-4xl mx-auto p-4"> <h1 class="text-6xl">Hi [wave emoji]</h1> </main> ); }); And in the browser, it looks like this: Conclusion That’s all we’ll cover today. Again, this post was mostly focused on getting the boilerplate stuff out of the way so that the next post can be dedicated to integrating OpenAI’s API into our project. With that in mind, I encourage you to take a moment to think about some AI app ideas that you might want to build. There will be a lot of flexibility for you to put your own spin on things. I’m excited to see what you come up with, and if you would like to explore the code in more detail, I’ll post it on my GitHub account.
For many full-stack developers, the combination of Spring Boot and React has become a staple in building dynamic business applications. Yet, while powerful, this pairing has its set of challenges. From type-related errors to collaboration hurdles, developers often find themselves navigating a maze of everyday issues. Enter Hilla, a framework that aims to simplify this landscape. If Hilla hasn't crossed your radar yet, this article will provide an overview of what it offers and how it can potentially streamline your development process when working with Spring Boot and React.

Spring Boot, React, and Hilla

For full-stack developers, the combination of Java on the backend and React (with TypeScript) on the frontend offers a compelling blend of reliability and dynamism. Java, renowned for its robust type system, ensures data behaves predictably, catching potential errors at compile-time. Meanwhile, TypeScript brings a similar layer of type safety to the JavaScript world, enhancing React's capabilities and ensuring components handle data as expected. However, while both Java and TypeScript offer individual type-safe havens, there's often a missing link: ensuring that this type-safety is consistent from the backend all the way to the frontend. This is where the benefits of Hilla shine, enabling:

- End-to-End Type Safety
- Direct Communication Between React and Spring Services
- Consistent Data Validation and Type Safety

End-To-End Type Safety

Hilla takes type safety a step further by ensuring it spans the entire development spectrum. Developers spend less time perusing API documentation and more time coding. With automatically generated TypeScript services and data types, Hilla allows developers to explore APIs directly within their IDE. This seamless integration means that if any code is altered, whether on the frontend or backend, any inconsistencies will trigger a compile-time error, ensuring that issues are caught early and rectified.

Direct Communication Between React and Spring Services

With Hilla, the cumbersome process of managing endpoints or deciphering complex queries becomes a thing of the past. Developers can directly call Spring Boot services from their React client, receiving precisely what's needed. This is achieved by making a Spring @Service available to the browser using Hilla's @BrowserCallable annotation. This direct communication streamlines data exchange, ensuring that the frontend gets exactly what it expects without any unnecessary overhead. Here's how it works. First, you add the @BrowserCallable annotation to your Spring service:

Java
@BrowserCallable
@Service
public class CustomerService {
    public List<Customer> getCustomers() {
        // Fetch customers from DB or API
    }
}

Based on this annotation, Hilla auto-generates TypeScript types and clients that enable calling the Java backend in a type-checkable way from the frontend (no need to declare any REST endpoints):

TypeScript
function CustomerList() {
    // Customer type is automatically generated by Hilla
    const [customers, setCustomers] = useState<Customer[]>([]);

    useEffect(() => {
        CustomerService.getCustomers().then(setCustomers);
    }, []);

    return (
        <ComboBox items={customers}></ComboBox>
    );
}

Consistent Data Validation and Type Safety

One of the standout features of Hilla is its ability to maintain data validation consistency across the stack. By defining data validation rules once on the backend, Hilla auto-generates TypeScript validations for the frontend.
This not only enhances developer productivity but also ensures that data remains consistent, regardless of where it's being processed. For instance, if a field is marked as @NotBlank in Java, Hilla ensures that the same validation is applied when this data is processed in the React frontend. Java public class Customer { @NotBlank(message = "Name is mandatory") private String name; @NotBlank(message = "Email is mandatory") @Email private String email; // Getters and setters } The Hilla useForm hook uses the generated TypeScript model to apply the validation rules to the form fields. TypeScript function CustomerForm() { const {model, field, read, submit} = useForm(CustomerModel, { onSubmit: CustomerService.saveCustomer }); return ( <div> <TextField label="Name" {...field(model.name)} /> <EmailField label="Email" {...field(model.email)} /> <Button onClick={submit}>Save</Button> </div> ) } Batteries and Guardrails Included Hilla streamlines full-stack development by offering pre-built tools, enhancing real-time capabilities, prioritizing security, and ensuring long-term adaptability. The framework provides a set of pre-built UI components designed specifically for data-intensive applications. These components range from data grids and forms to various select components and editors. Moreover, for those looking to implement real-time features, Hilla simplifies the process with its support for reactive endpoints, removing the need for manual WebSocket management. Another notable aspect of Hilla is its security integration. By default, it's connected with Spring Security, offering robust access control mechanisms to safeguard data exchanges. The framework's stateless design ensures that as user demands increase, the application remains efficient. Hilla's design approach not only streamlines the current development process but also future-proofs your app. It ensures that all components integrate seamlessly, making updates, especially transitioning from one version to another, straightforward and hassle-free. In Conclusion Navigating full-stack development with Spring Boot and React can be complex. This article highlighted how Hilla can alleviate many of these challenges. From ensuring seamless type safety to simplifying real-time integrations and bolstering security, Hilla stands out as a comprehensive solution. Its forward-thinking design ensures that as the tech landscape evolves, your applications remain adaptable and updates remain straightforward. For those immersed in the world of Spring Boot and React, considering Hilla might be a step in the right direction. It's more than just a framework; it's a pathway to streamlined and future-ready development.
In this article, we’ll explain how to use Ansible to build and deploy a Quarkus application. Quarkus is an exciting, lightweight Java development framework designed for cloud and Kubernetes deployments, and Red Hat Ansible Automation Platform is one of the most popular automation tools and a star product from Red Hat. Set Up Your Ansible Environment Before discussing how to automate a Quarkus application deployment using Ansible, we need to ensure the prerequisites are in place. First, you have to install Ansible on your development environment. On a Fedora or a Red Hat Enterprise Linux machine, this is achieved easily by utilizing the dnf package manager: Shell $ dnf install ansible-core The only other requirement is to install the Ansible collection dedicated to Quarkus: Shell $ ansible-galaxy collection install middleware_automation.quarkus This is all you need to prepare the Ansible control machine (the name given to the machine executing Ansible). Generally, the control node is used to set up other systems that are designated under the name targets. For the purpose of this tutorial, and for simplicity's sake, we are going to utilize the same system for both the control node and our (only) target. This will make it easier to reproduce the content of this article on a single development machine. Note that you don’t need to set up any kind of Java development environment, because the Ansible collection will take care of that. The Ansible collection dedicated to Quarkus is a community project, and it’s not supported by Red Hat. However, both Quarkus and Ansible are Red Hat products and thus fully supported. The Quarkus collection might be supported at some point in the future but is not at the time of the writing of this article. Inventory File Before we can execute Ansible, we need to provide the tool with an inventory of the targets. There are many ways to achieve that, but the simplest solution for a tutorial such as this one is to write up an inventory file of our own. As mentioned above, we are going to use the same host for both the controller and the target, so the inventory file has only one host. Here again, for simplicity's sake, this machine is going to be the localhost: Shell $ cat inventory [all] localhost ansible_connection=local Refer to the Ansible documentation for more information on Ansible inventory. Build and Deploy the App With Ansible For this demonstration, we are going to utilize one of the sample applications provided as part of the Quarkus quick starts project. We will use Ansible to build and deploy the Getting Started application. All we need to provide to Ansible is the application name, repository URL, and the destination folder, where to deploy the application on the target. Because of the directory structure of the Quarkus quick start, containing several projects, we'll also need to specify the directory containing the source code: Shell $ ansible-playbook -i inventory middleware_automation.quarkus.playbook \ -e app_name='optaplanner-quickstart' \ -e quarkus_app_source_folder='optaplanner-quickstart' \ -e quarkus_path_to_folder_to_deploy=/opt/optplanner \ -e quarkus_app_repo_url='https://github.com/quarkusio/quarkus-quickstarts.git' Below is the output of this command: PLAY [Build and deploy a Quarkus app using Ansible] **************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Build the Quarkus from https://github.com/quarkusio/quarkus-quickstarts.git.] 
*** TASK [middleware_automation.quarkus.quarkus : Ensure required parameters are provided.] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Define path to mvnw script.] ***** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure that builder host localhost has appropriate JDK installed: java-17-openjdk] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Delete previous workdir (if requested).] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure app workdir exists: /tmp/workdir] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Checkout the application source code.] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Build the App using Maven] ******* ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Display build application log] *** skipping: [localhost] TASK [Deploy Quarkus app on target.] ******************************************* TASK [middleware_automation.quarkus.quarkus : Ensure required parameters are provided.] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure requirements on target system are fullfilled.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/quarkus/roles/quarkus/tasks/deploy/prereqs.yml for localhost TASK [middleware_automation.quarkus.quarkus : Ensure required OpenJDK is installed on target.] *** skipping: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure Quarkus system group exists on target system] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure Quarkus user exists on target system.] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure deployement directory exits: /opt/optplanner.] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Set Quarkus app source dir (if not defined).] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Deploy application as a systemd service on target system.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/quarkus/roles/quarkus/tasks/deploy/service.yml for localhost TASK [middleware_automation.quarkus.quarkus : Deploy application from to target system] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Deploy Systemd configuration for Quarkus app] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure Quarkus app service is running.] *** changed: [localhost] TASK [middleware_automation.quarkus.quarkus : Ensure firewalld configuration is appropriate (if requested).] *** skipping: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19 changed=8 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 As you can see, the Ansible collection for Quarkus does all the heavy lifting for us: its content takes care of checking out the source code from GitHub and builds the application. It also ensures the system used for this step has the required OpenJDK installed on the target machine. Once the application is successfully built, the collection takes care of the deployment. Here again, it checks that the appropriate OpenJDK is available on the target system. Then, it verifies that the required user and group exist on the target and if not, creates them. 
This is recommended mostly to be able to run the Quarkus application with a regular user, rather than with the root account. With those requirements in place, the jars produced during the build phase are copied over to the target, along with the required configuration for the application integration into systemd as a service. Any change to the systemd configuration requires reloading its daemon, which the collection ensures will happen whenever it is needed. With all of that in place, the collection starts the service itself. Validate the Execution Results Let’s take a minute to verify that all went well and that the service is indeed running: Shell # systemctl status optaplanner-quickstart.service ● optaplanner-quickstart.service - A Quarkus service named optaplanner-quickstart Loaded: loaded (/usr/lib/systemd/system/optaplanner-quickstart.service; enabled; vendor preset: disabled) Active: active (running) since Wed 2023-04-26 09:40:13 UTC; 3h 19min ago Main PID: 934 (java) CGroup: /system.slice/optaplanner-quickstart.service └─934 /usr/bin/java -jar /opt/optplanner/quarkus-run.jar Apr 26 09:40:13 be44b3acb1f3 systemd[1]: Started A Quarkus service named optaplanner-quickstart. Apr 26 09:40:14 be44b3acb1f3 java[934]: __ ____ __ _____ ___ __ ____ ______ Apr 26 09:40:14 be44b3acb1f3 java[934]: --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ Apr 26 09:40:14 be44b3acb1f3 java[934]: -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ Apr 26 09:40:14 be44b3acb1f3 java[934]: --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ Apr 26 09:40:14 be44b3acb1f3 java[934]: 2023-04-26 09:40:14,843 INFO [io.quarkus] (main) optaplanner-quickstart 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.16.6.Final) started in 1.468s. Listening on: http://0.0.0.0:8080 Apr 26 09:40:14 be44b3acb1f3 java[934]: 2023-04-26 09:40:14,848 INFO [io.quarkus] (main) Profile prod activated. Apr 26 09:40:14 be44b3acb1f3 java[934]: 2023-04-26 09:40:14,848 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, hibernate-orm-rest-data-panache, jdbc-h2, narayana-jta, optaplanner, optaplanner-jackson, resteasy-reactive, resteasy-reactive-jackson, resteasy-reactive-links, smallrye-context-propagation, vertx, webjars-locator] Having the service running is certainly good, but it does not guarantee by itself that the application is available. To double-check, we can simply confirm the accessibility of the application by connecting to it: Shell # curl -I http://localhost:8080/ HTTP/1.1 200 OK accept-ranges: bytes content-length: 8533 cache-control: public, immutable, max-age=86400 last-modified: Wed, 26 Apr 2023 10:00:18 GMT date: Wed, 26 Apr 2023 13:00:19 GMT Writing up a Playbook The default playbook provided with the Ansible collection for Quarkus is quite handy and allows you to bootstrap your automation with a single command. However, most likely, you’ll need to write your own playbook so you can add the automation required around the deployment of your Quarkus app. Here is the content of the playbook provided with the collection that you can simply use as a base for your own: YAML --- - name: "Build and deploy a Quarkus app using Ansible" hosts: all gather_facts: false vars: quarkus_app_repo_url: 'https://github.com/quarkusio/quarkus-quickstarts.git' app_name: 'optaplanner-quickstart' quarkus_app_source_folder: 'optaplanner-quickstart' quarkus_path_to_folder_to_deploy: '/opt/optaplanner' pre_tasks: - name: "Build the Quarkus from {{ quarkus_app_repo_url }}."
ansible.builtin.include_role: name: quarkus tasks_from: build.yml tasks: - name: "Deploy Quarkus app on target." ansible.builtin.include_role: name: quarkus tasks_from: deploy.yml To run this playbook, you again use the ansible-playbook command, this time providing the path to the playbook: Shell $ ansible-playbook -i inventory playbook.yml Conclusion Thanks to the Ansible collection for Quarkus, the work needed to automate the deployment of a Quarkus application is minimal. The collection takes care of most of the heavy lifting and allows its users to focus on the automation specific to their application and business needs.
In the ever-evolving landscape of software development, maintaining data integrity, ensuring security, and meeting regulatory compliance requirements are paramount. Auditing, the practice of meticulously tracking and recording changes to data and user actions within an application, emerges as a crucial component in achieving these objectives. In the context of Spring Boot, a versatile framework for building Java-based applications, auditing becomes not just a best practice but a necessity. In this comprehensive guide, we embark on a journey to explore the intricate world of auditing within Spring Boot applications. Our focus is on harnessing the combined power of three stalwarts in the Java ecosystem: Java Persistence API (JPA), Hibernate, and Spring Data JPA. By the end of this article, you will possess the knowledge and tools needed to implement robust auditing solutions tailored to your Spring Boot projects. Understanding Auditing in Spring Boot Auditing is a fundamental aspect of software development, especially when dealing with data-driven applications. It involves tracking changes to data, monitoring user actions, and maintaining a historical record of these activities. In the context of Spring Boot applications, auditing plays a crucial role in ensuring data integrity, security, and compliance with regulatory requirements. What Is Auditing? At its core, auditing involves recording and monitoring actions taken within a software application. These actions can include: Creation: Tracking when a new record or entity is added to the system. Modification: Monitoring changes made to existing data, including updates and edits. Deletion: Recording when data is removed or marked as deleted. Access Control: Keeping a log of who accessed certain data and when. Auditing serves several important purposes: Data Integrity: It helps maintain data consistency and prevents unauthorized or malicious changes. Security: Auditing aids in identifying and responding to security breaches or suspicious activities. Compliance: Many industries and applications require auditing to comply with regulations and standards. Troubleshooting: It simplifies the process of identifying and resolving issues by providing a historical trail of actions. Auditing With JPA JPA doesn’t explicitly contain an auditing API, but we can achieve this functionality by using entity lifecycle events. JPA, the Java Persistence API, plays a pivotal role in enabling auditing within your Spring Boot application. It introduces the concept of entity listeners, which allows you to define methods that respond to lifecycle events of JPA entities. For auditing purposes, we are particularly interested in three key lifecycle events: @PrePersist: This event occurs just before an entity is first saved in the database, making it suitable for capturing creation-related information. @PreUpdate: This event takes place just before an entity is updated in the database, making it ideal for tracking modifications. @PreRemove: This event occurs just before an entity is removed from the database. Let's explore how to use these events effectively: Java @Entity public class Product { // ... @PrePersist protected void onCreate() { // Implement auditing logic here, e.g., setting creation timestamp and user. } @PreUpdate protected void onUpdate() { // Implement auditing logic here, e.g., updating modification timestamp and user. } @PreRemove protected void onRemove() { // Implement auditing logic here, e.g., setting deletion timestamp and user.
} } Internal callback methods should always return void and take no arguments. They can have any name and any access level but shouldn’t be static. If we need to add such auditing to multiple classes, we can use @EntityListeners to centralize the code: Java @EntityListeners(AuditListener.class) @Entity public class Product { ... } Java public class AuditListener { @PrePersist @PreUpdate @PreRemove private void beforeAnyOperation(Object object) { ... } } Auditing With Hibernate Envers To set up Envers, we need to add the hibernate-envers JAR into our classpath: Gradle: Groovy dependencies { ... implementation 'org.hibernate:hibernate-envers:6.2.4.Final' ... } Maven: XML <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-envers</artifactId> <version>${hibernate.version}</version> </dependency> Then we add the @Audited annotation, either on an @Entity (to audit the whole entity) or on specific @Columns (if we need to audit specific properties only): Java @Entity @AuditTable(value = "au_drone") public class Drone implements Serializable { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @NotEmpty private String serialNumber; private int weightLimit; @Audited private int batteryCapacity; ... } @AuditTable is an optional annotation that defines the name of the audit table in the database. By default, Envers creates an audit table named "tableName_AUD." The audit table suffix (for all audit tables) and other configuration options can be changed in the application properties file. To exclude a field of an audited entity from auditing, use the @NotAudited annotation. For example, password changes are usually not relevant for auditing; with the @NotAudited annotation, changes to the "password" attribute will be ignored by Envers. Java @Entity @Audited public class User implements Serializable { @NotEmpty private String login; @NotEmpty private String userName; @NotAudited private String password; ... } You can have the audit log tables created automatically in a development environment by setting the hibernate.hbm2ddl.auto property to create, create-drop, or update. Spring Data JPA for Auditing Spring Data JPA, an integral part of the Spring Data project, simplifies database access with JPA and enhances auditing capabilities. In this section, we will explore the built-in auditing annotations provided by Spring Data JPA, including @CreatedBy, @CreatedDate, @LastModifiedBy, and @LastModifiedDate. These annotations empower you to effortlessly incorporate auditing attributes into your entity classes, automating the auditing process with ease. The Role of Spring Data JPA Annotations Spring Data JPA offers a suite of annotations designed specifically for auditing purposes. These annotations eliminate the need for manual configuration and coding, streamlining the implementation of auditing features in your Spring Boot application. Let's take a closer look at these annotations: @CreatedBy and @LastModifiedBy @CreatedBy: This annotation marks a field or property as the creator of an entity. @LastModifiedBy: It signifies the last user who modified an entity. Java @Entity public class Product { @CreatedBy @ManyToOne private User createdBy; @LastModifiedBy @ManyToOne private User lastModifiedBy; ... } In the example above, createdBy and lastModifiedBy are associated with a User entity, indicating the user responsible for the creation and last modification of a Product entity.
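Note that for @CreatedBy and @LastModifiedBy to be populated, Spring Data JPA also needs to know who the current user is, which it obtains from an AuditorAware bean. The following is a minimal sketch of that wiring, assuming a String-based auditor for simplicity (an application that stores a User reference, as in the example above, would implement AuditorAware<User> instead); the AuditingConfig class and the auditorProvider bean name are illustrative placeholders, not part of the original example.

Java
import java.util.Optional;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.domain.AuditorAware;

@Configuration
public class AuditingConfig {

    // Supplies the value written into @CreatedBy/@LastModifiedBy fields.
    // A real application would typically resolve the current user from
    // Spring Security's SecurityContextHolder instead of a fixed value.
    @Bean
    public AuditorAware<String> auditorProvider() {
        return () -> Optional.of("system");
    }
}

In addition, each audited entity must register Spring's AuditingEntityListener (org.springframework.data.jpa.domain.support.AuditingEntityListener) via @EntityListeners so that the auditing callbacks fire; once auditing is enabled, Spring resolves a single AuditorAware bean like the one above automatically.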
@CreatedDate and @LastModifiedDate @CreatedDate: This annotation captures the timestamp when an entity is created. @LastModifiedDate: It records the timestamp of the last modification. Java @Entity public class Product { @CreatedDate private LocalDateTime createdDate; @LastModifiedDate private LocalDateTime lastModifiedDate; ... } In this example, createdDate and lastModifiedDate are used to store the timestamps of when the Product entity was created and last modified, respectively. Enabling Spring Data JPA Auditing To leverage Spring Data JPA's auditing annotations, you need to enable auditing in your Spring Boot application. Follow these steps: Step 1: Add @EnableJpaAuditing Annotation In your Spring Boot application's main configuration class, add the @EnableJpaAuditing annotation to enable Spring Data JPA's auditing features. Java @SpringBootApplication @EnableJpaAuditing public class YourApplication { public static void main(String[] args) { SpringApplication.run(YourApplication.class, args); } } Step 2: Use Auditing Annotations in Entity Classes Annotate your JPA entity classes with the relevant Spring Data JPA auditing annotations, as demonstrated in the previous examples. With auditing enabled and the appropriate annotations added to your entity classes, Spring Data JPA will automatically manage the audit fields for you. It will populate these fields with the necessary information during entity creation and modification. Benefits of Spring Data JPA Auditing Spring Data JPA's built-in auditing annotations provide several advantages: Simplicity: They eliminate the need for manual implementation of audit fields and logic. Consistency: The annotations ensure a standardized approach to auditing across your application. Automation: Auditing information is automatically captured, reducing the risk of errors. Flexibility: You can easily customize auditing behavior when needed. Conclusion In the realm of software development, where data security, accountability, and transparency are paramount, auditing is an essential practice. In this comprehensive guide, we've explored the intricacies of implementing auditing in Spring Boot applications using JPA, Hibernate, and Spring Data JPA. We began our journey by setting up a Spring Boot project, understanding the significance of a well-structured foundation. We then delved into the core concepts of auditing with JPA and Hibernate, learning how to capture data changes with ease. Spring Data JPA's auditing annotations proved to be a powerful tool in automating auditing processes. We harnessed these annotations to effortlessly track user actions, timestamps, and modification history.
To propose an implementation, we will present a use case that allows us to define the requirements. We will describe the functional and technical context in which we will operate and then specify the requirements. Based on these requirements, we will propose a Keycloak implementation to meet them and make the necessary adaptations on the Angular and Spring Boot side. Environment Functional Context This concerns an accountancy firm that provides services to external clients and has employed staff to manage the files. If a customer (external user) wishes to connect, they must create an account on the SaaS application. In the same manner, when staff members (internal users) want to work on the files, they must use their Active Directory account to log in. Users representation It is important to consider that customers and employees may share some rights but also have distinct ones. The two user databases must remain independent, and any changes made to internal users should not affect customers. Technical Context The existing SaaS product is divided into three components: Frontend, Backend, and database. Frontend It is an Angular application and is responsible for displaying information and collecting data that has been entered by internal and external users. Authorization to access specific pages must also be established. Backend It is built with Spring Boot and is expected to retrieve data from the database, interface with external APIs, and, most importantly, manage data access authorizations. It also manages configuration for the front end. Database A PostgreSQL database that stores and organizes data. Consequently, the application components will require modification to meet this requirement. Keycloak Authentication will use the OAuth2 protocol with OpenID Connect. Keycloak satisfies these and other requirements. Architecture One possible solution could be to have two entirely standalone Keycloak instances, which could lead to higher maintenance and infrastructure costs. Therefore, we will investigate the possibility of using a single instance of Keycloak. Realms To logically separate our users, we can use realms. We are going to create two realms: Internal Realm: Users who will be pulled from Active Directory using UserFederation will be designated for the Internal Realm. External Realm: External users who require accounts within the software will be designated for the External Realm. Clients We will use two clients in each realm. The front-end client: It is a public (non-confidential) client. We will make it available to the front-end component to obtain the login page, transmit the connection information, and enter the application. The back-end client: This client will be private, and access to it will require a secret. It can only be contacted by the backend application. The purpose of this client is to verify the JWT tokens sent by the front-end application. Roles The roles may differ because they are realm-specific. If some of them are common, you just need to give them the same name to refer to them while keeping the component code realm-agnostic. Results Finally, we have the following architecture: Keycloak architecture NB: The roles could also be implemented at a realm level and a client level for greater precision.
Deployments To make deployment easier, we’re going to use a docker-compose file: YAML version: '3' services: keycloak: image: quay.io/keycloak/keycloak:22.0.1 ports: - "8080:8080" environment: - KEYCLOAK_ADMIN=admin - KEYCLOAK_ADMIN_PASSWORD=admin command: ["start-dev"] You can deploy your application just by running 'docker-compose up -d'. Then, create the two realms. No special configurations are required. Then, create a client-front and client-back in each realm. For the client-front, you do not need to modify the default settings. For client-back, you will have to set 'Client authentication' to 'On.' Components Adaptation Now that we have Keycloak installed and configured, we need to customize the components. Frontend For the front end, we consider a simple Angular application. Configuration Keycloak provides a JavaScript adapter. We will use it in combination with an Angular adapter: npm install keycloak-angular keycloak-js Code Adaptation Thanks to these libraries, we can use the Keycloak initialization function in app.module.ts when initializing the app, as follows: Declaration of the provider and use of the initializeKeycloak method: TypeScript { provide: APP_INITIALIZER, useFactory: initializeKeycloak, multi: true, deps: [KeycloakService, KeycloakConfigService] }, Declaration of the initializeKeycloak method: TypeScript export function initializeKeycloak( keycloak: KeycloakService, keycloakConfigService: KeycloakConfigService ) { // Set default realm let realm = "EXTERNAL"; const pathName: string[] = window.location.pathname.split('/'); if (pathName[1] === "EXTERNAL") { realm = "EXTERNAL"; } if (pathName[1] === "INTERNAL") { realm = "INTERNAL"; } return (): Promise<any> => { return new Promise<any>(async (resolve, reject) => { try { const auth = await initMultiTenant(keycloak, keycloakConfigService, realm); resolve(auth); } catch (error) { reject(error); } }); }; } export async function initMultiTenant( keycloak: KeycloakService, keycloakConfigService: KeycloakConfigService, realm: string ) { return keycloak.init({ config: { url: await firstValueFrom(keycloakConfigService .fetchConfig()).then( (conf: PublicKeycloakConfig) => { return conf?.url; } ), realm, clientId: 'front-client' }, initOptions: { onLoad: 'login-required', checkLoginIframe: false }, enableBearerInterceptor: true, bearerExcludedUrls: ['/public-configuration/keycloak'] }); } Backend In the backend, we should intercept incoming requests in order to: 1. Get the current realm so we can contact Keycloak with the appropriate configuration. 2. Based on that realm, contact Keycloak to validate the bearer token.
Configuration To handle Keycloak interactions, we first need to import the Keycloak adapters and Spring Security to manage the OAuth2 process: XML <dependency> <groupId>org.springframework.boot</groupId> <artifactId> spring-boot-starter-security </artifactId> </dependency> <dependency> <groupId>org.keycloak</groupId> <artifactId> keycloak-spring-boot-starter </artifactId> <version>18.0.2</version> </dependency> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>18.0.2</version> <type>pom</type> <scope>import</scope> </dependency> Code Adaptation Now, we can intercept the incoming request to read the headers and identify the current realm of the request: Java @Override public KeycloakDeployment resolve(OIDCHttpFacade.Request request) { String header = request.getHeaders(CUSTOM_HEADER_REALM_SELECTOR) .stream().findFirst().orElse(null); if (EXT_XEN_REALM.equals(header)) { buildAdapterConfig(extXenKeycloakConfig); } else { buildAdapterConfig(intXenKeycloakConfig); } return KeycloakDeploymentBuilder.build(adapterConfig); } Conclusion Finally, we obtain the following architecture: Final architecture Step 1: We can contact our application with two different URLs: http://localhost:4200/external http://localhost:4200/internal Step 2: The front-end application asks Keycloak for the login page (only if the user is not connected yet), passing the realm as a parameter so the user lands on the appropriate login page. Step 3: The login page is sent back to the front end. Step 4: The credentials are sent to Keycloak using the keycloak-js adapter, which allows for the secure transfer of this sensitive information. Step 5: If the credentials are valid (if not, an HTTP 401 Unauthorized is returned), an HTTP 302 is returned to redirect the user to the front-end home page. Step 6: A request is sent to the backend to retrieve data to display the home page. Step 7: After intercepting and parsing the request to retrieve the realm and the Bearer token, the backend resolver contacts the Keycloak server on the requested realm to validate the token. Step 8: Token validity is sent back to the backend server. Step 9: Finally, the backend can access the requested data and send it back to the frontend. By following these steps, we can ensure that a user lands on the correct login page and navigates through the application independently of their realm, all controlled by a single instance of Keycloak.
Redux is a popular state management library used with React and React Native to manage the application's state efficiently. While Redux provides many benefits, it can also present some challenges, especially when used in the context of React Native mobile app development. In this blog, we'll explore some common problems developers encounter when using Redux with React Native and how to address them. 1. Boilerplate Code Redux is known for its boilerplate code, which can be extensive. React Native projects tend to benefit from lean and concise codebases, so Redux's verbosity can be overwhelming. To mitigate this issue, consider using libraries like Redux Toolkit, which simplifies the setup and reduces boilerplate code. JavaScript // Without Redux Toolkit const initialState = { value: 0 }; function counterReducer(state = initialState, action) { switch (action.type) { case 'increment': return { ...state, value: state.value + 1 }; case 'decrement': return { ...state, value: state.value - 1 }; default: return state; } } // With Redux Toolkit import { createSlice } from '@reduxjs/toolkit'; const counterSlice = createSlice({ name: 'counter', initialState: { value: 0 }, reducers: { increment: (state) => { state.value += 1; }, decrement: (state) => { state.value -= 1; }, }, }); export const { increment, decrement } = counterSlice.actions; export default counterSlice.reducer; 2. Managing Asynchronous Actions Handling asynchronous actions, such as network requests, in Redux can be tricky. Thunks, sagas, or middleware like Redux Thunk and Redux Saga are commonly used to manage asynchronous operations. These tools provide a structured way to perform side effects while maintaining the predictability of Redux actions. JavaScript // Using Redux Thunk for async actions const fetchUserData = (userId) => async (dispatch) => { try { dispatch({ type: 'user/fetch/start' }); const response = await fetch(`https://api.example.com/users/${userId}`); const data = await response.json(); dispatch({ type: 'user/fetch/success', payload: data }); } catch (error) { dispatch({ type: 'user/fetch/error', payload: error }); } }; 3. State Shape Design Designing the shape of your Redux state is crucial, as a poorly designed state can lead to complex selectors and unnecessary re-renders. It's recommended to normalize your state, especially when dealing with relational data, to simplify state management and improve performance. 4. Excessive Re-renders Redux's connect function and useSelector hook can lead to excessive re-renders if not used carefully. Memoization techniques, such as useMemo or libraries like Reselect, can help optimize selectors to prevent unnecessary component re-renders. JavaScript import { useSelector, shallowEqual } from 'react-redux'; const selectedData = useSelector((state) => { // Expensive selector logic return computeSelectedData(state); }, shallowEqual); 5. Debugging Challenges Debugging Redux-related issues can be challenging, especially in larger codebases. Tools like Redux DevTools Extension and React Native Debugger can help you inspect and trace the flow of actions, but understanding the entire Redux flow may require some learning. 6. Learning Curve Redux, with its concepts of actions, reducers, and stores, can have a steep learning curve for beginners. It's important to invest time in understanding Redux thoroughly and practicing its concepts.
Conclusion While Redux is a powerful state management library, it does come with its share of challenges when used in React Native development. By addressing these common problems and adopting best practices like using Redux Toolkit, handling asynchronous actions with middleware, and optimizing state shape and selectors, you can harness the full potential of Redux in your React Native projects. Remember that Redux is a valuable tool, but its usage should be tailored to the specific needs of your project.
Justin Albano
Software Engineer,
IBM
Thomas Hansen
CTO,
AINIRO.IO
Hiren Dhaduk
CTO,
Simform
Tetiana Stoyko
CTO, Co-Founder,
Incora Software Development