Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of reusable code that supports the development process by providing ready-made components. Frameworks establish architectural patterns and structures, which help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring Framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility, and it is often the default choice for front-end work unless a project calls for something more specialized. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
Development at Scale
As organizations’ needs and requirements evolve, it’s critical for development to meet these demands at scale. The landscape in which mobile, web, and low-code applications are built continues to shift. This Trend Report explores these development trends and how they relate to scalability within organizations, highlighting application challenges, code, and more.
In this Java 21 tutorial, we dive into virtual threads, a game-changing feature for developers. Virtual threads are a lightweight and efficient alternative to traditional platform threads, designed to simplify concurrent programming and enhance the performance of Java applications. In this article, we’ll explore the ins and outs of virtual threads, their benefits, compatibility, and the migration path to help you leverage this powerful Java 21 feature.

Introducing Virtual Threads

Virtual threads represent a significant evolution in the Java platform’s threading model. They are designed to address the challenges of writing, maintaining, and optimizing high-throughput concurrent applications. To understand virtual threads, it’s essential to differentiate them from traditional platform threads.

In traditional Java, every instance of java.lang.Thread is a platform thread. A platform thread runs Java code on an underlying OS thread and occupies that OS thread for the duration of its execution. This means that the number of platform threads is limited to the number of available OS threads, leading to potential resource constraints and suboptimal performance in highly concurrent applications.

A virtual thread, on the other hand, is also an instance of java.lang.Thread, but it operates differently. Virtual threads run Java code on an underlying OS thread without capturing that OS thread for their entire lifecycle. This crucial difference means multiple virtual threads can share the same OS thread, offering a highly efficient way to utilize system resources. Unlike platform threads, virtual threads do not monopolize precious OS threads, so an application can run a significantly higher number of virtual threads than there are available OS threads.

The Roots of Virtual Threads

Virtual threads draw inspiration from user-mode threads, successfully employed in other multithreaded languages such as Go (with goroutines) and Erlang (with processes).
In the early days of Java, user-mode threads were implemented as “green threads” due to the immaturity and limited support for OS threads. These green threads were eventually replaced by platform threads, essentially wrappers for OS threads, operating under a 1:1 scheduling model.

Virtual threads take a more sophisticated approach, using an M:N scheduling model: many virtual threads (M) are scheduled to run on fewer OS threads (N). This M:N scheduling approach allows Java applications to achieve a high concurrency level without the resource constraints typically associated with platform threads.

Leveraging Virtual Threads

In Java 21, developers can easily harness the power of virtual threads. A new thread builder is introduced to create virtual and platform threads, providing flexibility and control over the threading model. To create a virtual thread, you can use the following code snippet:

```java
Thread.Builder builder = Thread.ofVirtual().name("Virtual Thread");
Runnable task = () -> System.out.println("Hello World");
Thread thread = builder.start(task);
System.out.println(thread.getName());
thread.join();
```

It’s important to note that virtual threads are significantly cheaper in terms of resource usage when compared to platform threads. You can create multiple virtual threads, allowing you to fully exploit the advantages of this new threading model (the two-argument name() method below appends a counter, starting at 0, to each thread's name):

```java
Thread.Builder builder = Thread.ofVirtual().name("Virtual Thread", 0);
Runnable task = () -> System.out.println("Hello World: " + Thread.currentThread().threadId());
Thread thread1 = builder.start(task);
Thread thread2 = builder.start(task);
thread1.join();
thread2.join();
```

Virtual threads can also be effectively utilized with the ExecutorService, as demonstrated in the code below:

```java
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
    Future<String> future = executor.submit(() -> "Hello World");
    System.out.println(future.get());
    System.out.println("The end!");
}
```

The Virtual vs. Platform Thread Trade-Off

It’s crucial to understand that platform threads are not deprecated in Java 21, and virtual threads are not a one-size-fits-all solution. Each type of thread has its own set of trade-offs, and the choice between them should be made based on your application’s specific requirements.

Virtual threads: Virtual threads are excellent for high-throughput concurrent tasks, especially when managing many lightweight threads without OS thread limitations. They are well-suited for I/O-bound operations, event-driven tasks, and workloads with many short-lived threads.

Platform threads: Platform threads are still valuable for applications where fine-grained control over thread interactions is essential. They are ideal for CPU-bound operations, real-time applications, and scenarios that require precise thread management.

In conclusion, Java 21’s virtual threads are a groundbreaking addition to the Java platform, offering developers a more efficient and scalable way to handle concurrency. By understanding the differences and trade-offs between virtual and platform threads, you can make informed decisions on when and how to leverage these powerful features to unlock the full potential of your Java applications.
Java was the first language I used professionally and is the scale by which I measure other languages I learned afterward. It's an OOP statically-typed language. Hence, Python feels a bit weird because of its dynamic typing approach. For example, Object offers methods equals(), hashCode(), and toString(). Because all other classes inherit from Object, directly or indirectly, all objects have these methods by definition. Conversely, Python was not initially built on OOP principles and is dynamically typed. Yet, any language needs cross-cutting features on unrelated objects. In Python, these are specially-named methods: methods that the runtime interprets in a certain way, but that you need to know about. You can call them magic methods. The documentation is pretty exhaustive, but it lacks examples for beginners. The goal of this post is to list most of these methods and provide such examples so that I can remember them. I've divided it into two parts to make it more digestible.

Lifecycle Methods

Methods in this section are related to the lifecycle of new objects.

object.__new__(cls[, ...])

The __new__() method is static, though it doesn't need to be explicitly marked as such. The method must return a new object instance of type cls; then, the runtime will call the __init__() (see below) method on the new instance. __new__() is meant to customize instance creation of subclasses of immutable classes.

```python
class FooStr(str):                                 #1
    def __new__(cls, value):
        return super().__new__(cls, f'{value}Foo') #2

print(FooStr('Hello'))                             #3
```

1. Inherit from str.
2. Create a new str instance, whose value is the value passed to the constructor, suffixed with Foo.
3. Print HelloFoo.

object.__init__(self[, ...])

__init__() is the regular initialization method, which you probably know if you've read any basic Python tutorial. The most significant difference from Java is that the superclass __init__() method is never called implicitly.
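Because the superclass initializer is never called implicitly, a subclass must invoke it by hand via super(). A minimal sketch, with made-up Base and Child classes:

```python
class Base:
    def __init__(self, a):
        self.a = a

class Child(Base):
    def __init__(self, a, b):
        super().__init__(a)  # explicit call; Python will not do this for you
        self.b = b

child = Child('one', 'two')
print(f'a={child.a}, b={child.b}')  # a=one, b=two
```

Omit the super().__init__(a) line and Child instances would simply lack the a attribute; no error is raised at class-definition time.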
One can only wonder how many bugs were introduced because somebody forgot to call the superclass method. __init__() differs from a constructor in that the object is already created.

```python
class Foo:
    def __init__(self, a, b, c):              #1
        self.a = a                            #2
        self.b = b                            #2
        self.c = c                            #2

foo = Foo('one', 'two', 'three')
print(f'a={foo.a}, b={foo.b}, c={foo.c}')     #3
```

1. The first parameter is the instance itself.
2. Initialize the instance.
3. Print a=one, b=two, c=three.

object.__del__(self)

If __init__() is akin to an initializer, then __del__() is its finalizer. As in Java, finalizers are unreliable; e.g., there's no guarantee that the interpreter finalizes instances when it shuts down.

Representation Methods

Python offers two main ways to represent objects: one "official" for debugging purposes, and the other "informal." You can use the former to reconstruct the object. The official representation is expressed via object.__repr__(self). The documentation states that the representation must be "information-rich and unambiguous."

```python
class Foo:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def __repr__(self):
        return f'Foo(a={self.a}, b={self.b}, c={self.c})'

foo = Foo('one', 'two', 'three')
print(foo)                                    #1
```

1. Print Foo(a=one, b=two, c=three).

My implementation returns a string, though it's not required. Yet, you can reconstruct the object with the information displayed.

object.__str__(self) handles the unofficial representation. As its name implies, it must return a string. The default implementation calls __repr__().

Aside from the two methods above, the object.__format__(self, format_spec) method returns a string representation of the object. The second argument follows the rules of the Format Specification Mini-Language. Note that the method must return a string. It's a bit involved, so I won't implement it. Finally, object.__bytes__(self) returns a byte representation of the object.
```python
from pickle import dumps         #1

class Foo:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def __repr__(self):
        return f'Foo(a={self.a}, b={self.b}, c={self.c})'

    def __bytes__(self):
        return dumps(self)       #2

foo = Foo('one', 'two', 'three')
print(bytes(foo))                #3
```

1. Use the pickle serialization library.
2. Delegate to the dumps() method.
3. Print the byte representation of foo.

Comparison Methods

Let's start with similarities with Java: Python has two methods, object.__eq__(self, other) and object.__hash__(self), that work in the same way. If you define __eq__() for a class, you must define __hash__() as well. Contrary to Java, if you don't define the former, you must not define the latter.

```python
class Foo:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __eq__(self, other):
        if not isinstance(other, Foo):                  #1
            return False
        return self.a == other.a and self.b == other.b  #2

    def __hash__(self):
        return hash(self.a + self.b)                    #3

foo1 = Foo('one', 'two')
foo2 = Foo('one', 'two')
foo3 = Foo('un', 'deux')

print(hash(foo1))
print(hash(foo2))
print(hash(foo3))
print(foo1 == foo2)                                     #4
print(foo2 == foo3)                                     #5
```

1. Objects that are not of the same type are not equal by definition.
2. Compare the equality of attributes.
3. The hash consists of the addition of the two attributes.
4. Print True.
5. Print False.

As in Java, __eq__() and __hash__() have plenty of gotchas. Some of them are the same, others not. I won't paraphrase the documentation; have a look at it. Other comparison methods are pretty self-explanatory:

Method | Operator
object.__lt__(self, other) | <
object.__le__(self, other) | <=
object.__gt__(self, other) | >
object.__ge__(self, other) | >=
object.__ne__(self, other) | !=

```python
class Foo:
    def __init__(self, a):
        self.a = a

    def __ge__(self, other):
        return self.a >= other.a     #1

    def __le__(self, other):
        return self.a <= other.a     #1

foo1 = Foo(1)
foo2 = Foo(2)

print(foo1 >= foo1)                  #2
print(foo1 >= foo2)                  #3
print(foo1 <= foo1)                  #4
print(foo2 <= foo2)                  #5
```

1. Compare the single attribute.
2. Print True.
3. Print False.
4. Print True.
5. Print True.

Note that comparison methods may return something other than a boolean. In this case, Python will transform the value into a boolean using the bool() function. I advise you not to rely on this implicit conversion.

Attribute Access Methods

As seen above, Python allows accessing an object's attributes via the dot notation. If the attribute doesn't exist, Python complains: 'Foo' object has no attribute 'a'. However, it's possible to define synthetic accessors on a class, via the object.__getattr__(self, name) and object.__setattr__(self, name, value) methods. The rule is that they are fallbacks: if the attribute doesn't exist, Python calls the method.

```python
class Foo:
    def __init__(self, a):
        self.a = a

    def __getattr__(self, attr):
        if attr == 'a':
            return 'getattr a'     #1
        if attr == 'b':
            return 'getattr b'     #2

foo = Foo('a')
print(foo.a)                       #3
print(foo.b)                       #4
print(foo.c)                       #5
```

1. Return the string if the requested attribute is a.
2. Return the string if the requested attribute is b.
3. Print a; the attribute exists, so __getattr__() is not called.
4. Print getattr b.
5. Print None.

For added fun, Python also offers object.__getattribute__(self, name). The difference is that it's called whether the attribute exists or not, effectively shadowing it.

```python
class Foo:
    def __init__(self, a):
        self.a = a

    def __getattribute__(self, attr):
        if attr == 'a':
            return 'getattr a'     #1
        if attr == 'b':
            return 'getattr b'     #2

foo = Foo('a')
print(foo.a)                       #3
print(foo.b)                       #4
print(foo.c)                       #5
```

1. Return the string if the requested attribute is a.
2. Return the string if the requested attribute is b.
3. Print getattr a.
4. Print getattr b.
5. Print None.

The dir() function returns an object's list of attributes and methods. You can control that list with the object.__dir__(self) method: overriding it replaces the default list entirely, so you need to return every name yourself. Note that it's the developer's responsibility to ensure the list contains actual class members.
```python
class Foo:
    def __init__(self, a):
        self.a = 'a'

    def __dir__(self):             #1
        return ['a', 'foo']

foo = Foo('one')
print(dir(foo))                    #2
```

1. Implement the method.
2. Display ['a', 'foo']; Python sorts the list. Note that there's no foo member, though.

Descriptors

Python descriptors are accessor delegates, akin to Kotlin's delegated properties. The idea is to factor a behavior somewhere so other classes can reuse it. In this way, they are the direct consequence of favoring composition over inheritance. They are available for getters, setters, and finalizers, respectively:

object.__get__(self, instance, owner=None)
object.__set__(self, instance, value)
object.__delete__(self, instance)

Let's implement a lazy descriptor that caches the result of a compute-intensive operation.

```python
class Lazy:                                                #1
    def __init__(self):
        self.cache = {}                                    #2

    def __get__(self, obj, objtype=None):
        if obj not in self.cache:
            self.cache[obj] = obj._intensiveComputation()  #3
        return self.cache[obj]

class Foo:
    lazy = Lazy()                                          #4

    def __init__(self, name):
        self.name = name
        self.count = 0                                     #5

    def _intensiveComputation(self):
        self.count = self.count + 1                        #6
        print(self.count)                                  #7
        return self.name

foo1 = Foo('foo1')
foo2 = Foo('foo2')

print(foo1.lazy)                                           #8
print(foo1.lazy)                                           #8
print(foo2.lazy)                                           #9
print(foo2.lazy)                                           #9
```

1. Define the descriptor.
2. Initialize the cache.
3. Call the intensive computation.
4. Attach the descriptor to the class.
5. Initialize a counter to track how many times the computation runs.
6. Increment the counter.
7. Print the counter.
8. Print foo1; the computation runs only on the first access.
9. Print foo2; likewise.

Conclusion

This concludes the first part of Python magic methods. The second part will focus on class, container, and number-related methods.
There are various methods of visualizing three-dimensional objects in two-dimensional space. For example, most 3D graphics engines use perspective projection as the main form of projection. This is because perspective projection is an excellent representation of the real world, in which objects become smaller with increasing distance. But when the relative position of objects is not important, and for a better understanding of the size of objects, you can use parallel projections. They are more common in engineering and architecture, where it is important to maintain parallel lines. Since the birth of computer graphics, these projections have been used to render 3D scenes when 3D rendering hardware acceleration was not possible. Recently, various forms of parallel projections have become a style choice for digital artists, and they are used to display objects in infographics and in digital art in general.

The purpose of this article is to show how to create and manipulate isometric views in SVG and how to define these objects using, in particular, the JointJS library. To illustrate SVG’s capabilities in creating parallel projections, we will use isometric projection as an example. This projection is one of the dominant projection types because it allows you to maintain the relative scale of objects along all axes.

Isometric Projection

Let’s define what isometric projection is. First of all, it is a parallel type of projection in which all lines from a “camera” are parallel. This means that the scale of an object does not depend on the distance between the “camera” and the object. And specifically, in isometric (which means “equal measure” in Greek) projection, scaling along each axis is the same. This is achieved by defining equal angles between all axes. In the following image, you can see how axes are positioned in isometric projection. Keep in mind that in this article, we will be using a left-handed coordinate system.
One of the features of the isometric projection is that it can be deconstructed into three different 2D projections: top, side, and front projections. For example, a cuboid can be represented by three rectangles on each 2D projection and then combined into one isometric view. The next image represents separate projections of an object using the left-handed coordinate system.

Separate views of the orthographic projection

Then, we can combine them into one isometric view:

Isometric view of the example object

The challenge with SVG is that it contains 2D objects which are located on one XY-plane. But we can overcome this by combining all projections in one plane and then separately applying a transformation to every object.

SVG Isometric View Transformations

In 3D, to create an isometric view, we can move the camera to a certain position, but SVG is purely a 2D format, so we have to create a workaround to build such a view. We recommend reading Cody Walker’s article that presents a method for creating isometric representations from 2D object views: top, side, and front projections. Based on the article, we need to create transformations for each 2D projection of the object separately. First, we need to rotate our plane by 30 degrees. Then, we will skew our 2D image by -30 degrees. This transformation will align our axes with the axes of the isometric projection. Finally, we need to use a scale operator to scale our 2D projection down vertically by 0.8602 to account for the distortion of the isometric projection.

Let’s introduce some SVG features that will help us implement isometric projection. The SVG specification allows users to specify a particular transformation in the transform attribute of an SVG element. This attribute helps us apply a linear transformation to the SVG element. To transform a 2D projection into an isometric view, we need to apply scale, rotate, and skew operators.
To represent the transformation in code, we can use the DOMMatrixReadOnly object, a browser API that represents a transformation matrix. Using this interface, we can create the matrix as follows:

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);
```

This interface allows building a transformation matrix from our values, and then we can apply the resulting value to the transform attribute using the matrix function. In SVG, we can present only one 2D space at a time, so for our conversion, we will be using the top projection as the base projection. This is mostly because the axes in this projection correspond with the axes in a normal SVG viewport. To demonstrate SVG possibilities, we will be using the JointJS library. We defined a rectangular grid in the XY-plane with a cell width of 20.

Let’s define SVG for the elements on the top projection from the example. To properly render this object, we need to specify two polygons for the two levels of our object. Also, we can apply a translate transformation to our element in 2D space using DOMMatrix:

```javascript
// Translate transformation for Top1 Element
const matrix2D = new DOMMatrixReadOnly()
  .translate(200, 200);
```

```html
<!--Top1 element-->
<polygon joint-selector="body" id="v-4" stroke-width="2" stroke="#333333"
    fill="#ff0000" fill-opacity="0.7"
    points="0,0 60,0 60,20 40,20 40,60 0,60"
    transform="matrix(1,0,0,1,200,200)">
</polygon>
<!--Top2 element-->
<polygon joint-selector="body" id="v-6" stroke-width="2" stroke="#333333"
    fill="#ff0000" fill-opacity="0.7"
    points="0,0 20,0 20,40 0,40"
    transform="matrix(1,0,0,1,240,220)">
</polygon>
```

Then, we can apply our isometric matrix to our elements.
Also, we will add a translate transformation to position elements in the right place:

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);

const top1Matrix = isoMatrix.translate(200, 200);
const top2Matrix = isoMatrix.translate(240, 220);
```

Isometric view without height adjustment

For simplicity, let’s assume that our element’s base plane is located on the XY plane. Therefore, we need to translate the top view so that it appears to be located on the top of the object. To do this, we can simply translate the projection by its Z coordinate in the scaled SVG space. The Top1 element has an elevation of 80, so we should translate it by (-80, -80). Similarly, the Top2 element has an elevation of 40, so we translate it by (-40, -40). We can apply these translations to our existing matrices:

```javascript
const top1MatrixWithHeight = top1Matrix.translate(-80, -80);
const top2MatrixWithHeight = top2Matrix.translate(-40, -40);
```

Final isometric view of top projection

In the end, we will have the following transform attributes for the Top1 and Top2 elements. Note that they differ only in the last two values, which represent the translate transformation:

```javascript
// Top1 element
transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,5.9,116.6)"

// Top2 element
transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,26.2,184.9)"
```

To create an isometric view of the side and front projections, we need to make a net so we can place all projections on the 2D SVG space. Let’s create a net by attaching side and front views similar to the classic cube net. Then, we need to skewX the side and front projections by 45 degrees. This will allow us to align the Z-axis for all projections.
After this transformation, we will get the following image:

Prepared 2D projection

Then, we can apply our isoMatrix to this object:

Isometric projection without depth adjustments

In every projection, there are parts that have a different third-coordinate value. Therefore, we need to adjust this depth coordinate for every projection, as we did with the top projection and its Z coordinate. In the end, we will get the following isometric view:

Final isometric view of the object

Using JointJS for the Isometric Diagram

JointJS allows us to create and manipulate such objects with ease due to its elements framework and wide set of tools. Using JointJS, we can define and control isometric objects to build powerful isometric diagrams. Remember the basic isometric transformation from the beginning of the article?

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);
```

In the JointJS library, we can apply this transformation to the whole object which stores all SVG elements, and then simply apply the object-specific transformations on top of it.

Isometric Grid Rendering

JointJS has great capabilities in the rendering of custom SVG markup. Utilizing JointJS, we can generate a path that is aligned to an untransformed grid and have it transformed automatically with the grid, thanks to the global paper transformation that we mentioned previously. You can see the grid and how we interpret the coordinate system in the demo below. Note that we can dynamically change the paper transformation, which allows us to change the view on the fly:

Isometric grid

Creating a Custom Isometric SVG Element

Here, we show a custom SVG isometric shape in JointJS. In our example, we use the isometricHeight property to store information about the third dimension and then use it to render our isometric object.
The following snippet shows how you can call the custom createIsometricElement function to alter object properties:

```javascript
const element = createIsometricElement({
  isometricHeight: GRID_SIZE * 3,
  size: { width: GRID_SIZE * 3, height: GRID_SIZE * 6 },
  position: { x: GRID_SIZE * 6, y: GRID_SIZE * 6 }
});
```

In the following demo, you can see that our custom isometric element can be moved like an ordinary element on the isometric grid. You can change dimensions by altering the parameters of the createIsometricElement function in the source code (when you click “Edit on CodePen”):

Custom isometric element on the isometric grid

Z-Index Calculation in Isometric Diagrams

One of the problems with an isometric view is placing elements according to their relative position. Unlike in a 2D plane, in an isometric view, objects have perceived height and can be placed one behind the other. We can achieve this behavior in SVG by placing them into the DOM in the right order. To define the order in our case, we can use the JointJS z attribute, which allows sending the correct element to the background so that it can be overlapped/hidden by the other element as expected. You can find more information about this problem in a great article by Andreas Hager.

We decided to sort the elements using a topological sorting algorithm. The algorithm consists of two steps. First, we need to create a special graph, and then we need to use a depth-first search on that graph to find the correct order of elements. As the first step, we need to populate the initial graph: for each object, we need to find all objects behind it. We can do that by comparing the positions of their bottom sides. Let’s illustrate this step with images. For example, take three elements which are positioned like this:

We have marked the bottom side of each object in the second image. Using this data, we will create a graph structure that will model topological relations between elements.
In the image, you can see how we define the points on the bottom side: we can find the relative position of all elements by comparing the aMax and bMin points. We define that if the x and y coordinates of point bMin are less than the coordinates of point aMax, then object b is located behind object a.

Algorithm data in a 2D space

Comparing the three elements from our previous example, we can produce the following graph:

Topological graph

After that, we need to use a variation of the depth-first search algorithm to find the correct rendering order. A depth-first search allows us to visit graph nodes according to the visibility order, starting from the most distant one. Here is a library-agnostic example of the algorithm (the elements are rectangles expected to expose bottomRight() and topLeft() methods returning their corner points):

```javascript
const sortElements = (elements) => {
  const nodes = elements.map((el) => {
    return {
      el: el,
      behind: [],
      visited: false,
      depth: null,
    };
  });
  for (let i = 0; i < nodes.length; ++i) {
    const a = nodes[i].el;
    const aMax = a.bottomRight();
    for (let j = 0; j < nodes.length; ++j) {
      if (i != j) {
        const b = nodes[j].el;
        const bMin = b.topLeft();
        if (bMin.x < aMax.x && bMin.y < aMax.y) {
          nodes[i].behind.push(nodes[j]);
        }
      }
    }
  }
  const sortedElements = depthFirstSearch(nodes);
  return sortedElements;
};

const depthFirstSearch = (nodes) => {
  let depth = 0;
  let sortedElements = [];
  const visitNode = (node) => {
    if (!node.visited) {
      node.visited = true;
      for (let i = 0; i < node.behind.length; ++i) {
        if (node.behind[i] == null) {
          break;
        } else {
          visitNode(node.behind[i]);
          delete node.behind[i];
        }
      }
      node.depth = depth++;
      sortedElements.push(node.el);
    }
  };
  for (let i = 0; i < nodes.length; ++i) {
    visitNode(nodes[i]);
  }
  return sortedElements;
};
```

This method can be implemented easily using the JointJS library: in the following CodePen, we use a special JointJS event to recalculate the z-indexes of our elements whenever the position of an element is changed.
As outlined above, we use a special z property of the element model to specify the rendering order and assign it during the depth-first traversal. (Note that the algorithm’s behavior is undefined in the case of intersecting elements due to the nature of the implementation of isometric objects.)

Z-index calculations for isometric diagrams

The JointJS Demo

We have created a JointJS demo that combines all of these methods and techniques and also allows you to easily switch between 2D and isometric SVG markup. Crucially, as you can see, the powerful features of JointJS (which allow us to move elements, connect them with links, and create tools to edit them, among others) work just as well in the isometric view as they do in 2D. You can see the demo here.

Throughout this article, we used our open-source JointJS library for illustration. However, since you were so thorough with your exploration, we would like to extend to you an invitation to our no-commitment 30-day trial of JointJS+, an advanced commercial extension of JointJS. It will allow you to experience additional powerful tools for creating delightful diagrams.
Python is becoming a more and more popular choice among developers for a diverse range of applications. However, as with any language, effectively scaling Python services can pose challenges. This article explains concepts that can be leveraged to better scale your applications. By understanding CPU-bound versus I/O-bound tasks, the implications of the Global Interpreter Lock (GIL), and the mechanics behind thread pools and asyncio, we can better scale Python applications.

CPU-Bound vs. I/O-Bound: The Basics

CPU-Bound Tasks: These tasks involve heavy calculations, data processing, and transformations, demanding significant CPU power.

I/O-Bound Tasks: These tasks typically wait on external resources, such as reading from or writing to databases, files, or network operations.

Recognizing if your service is primarily CPU-bound or I/O-bound is the foundation for effective scaling.

Concurrency vs. Parallelism: A Simple Analogy

Imagine multitasking on a computer:

Concurrency: You have multiple applications open. Even if only one is active at a moment, you quickly switch between them, giving the illusion of them running simultaneously.

Parallelism: Multiple applications genuinely run at the same time, like running calculations on a spreadsheet while downloading a file.

In a single-core CPU scenario, concurrency involves rapidly switching tasks, while parallelism allows multiple tasks to execute simultaneously.

Global Interpreter Lock: GIL

You might think scaling CPU-bound Python services is as simple as adding more CPU power. However, the Global Interpreter Lock (GIL) in Python's standard implementation complicates this. The GIL is a mutex ensuring only one thread executes Python bytecode at a time, even on multi-core machines. This constraint means that CPU-bound tasks in Python can't fully harness the power of multithreading due to the GIL.
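To make the CPU-bound/I/O-bound distinction concrete, here is a small sketch; the two task functions are illustrative examples, not from the article:

```python
import hashlib
import os
import tempfile

def cpu_bound_task(n):
    # CPU-bound: repeated hashing keeps the processor busy the whole time
    digest = b"seed"
    for _ in range(n):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def io_bound_task(path):
    # I/O-bound: most of the time is spent waiting on the file system
    with open(path, "rb") as f:
        return len(f.read())

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"x" * 1024)

cpu_result = cpu_bound_task(10_000)
io_result = io_bound_task(path)  # 1024 bytes were written above
os.remove(path)
print(len(cpu_result), io_result)
```

The first function is limited by how fast the CPU can hash; the second by how fast the storage can answer, which is why they scale with different techniques.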
Scaling Solutions: I/O-Bound and CPU-Bound

ThreadPoolExecutor

This class provides an interface for asynchronously executing functions using threads. Threads in Python are well suited to I/O-bound tasks (since they release the GIL during I/O operations), but they are less effective for CPU-bound tasks because of the GIL.

Asyncio

Suited for I/O-bound tasks, asyncio offers an event-driven framework for asynchronous I/O operations. It employs a single-threaded model, yielding control back to the event loop during I/O waits so other tasks can run. Compared to threads, asyncio is leaner and avoids overhead such as thread context switches.

Here's a practical comparison: fetching URL data (I/O-bound) sequentially, with a thread pool, and with asyncio.

Python

import asyncio
import timeit
from concurrent.futures import ThreadPoolExecutor

import requests

URLS = [
    "https://www.example.com",
    "https://www.python.org",
    "https://www.openai.com",
    "https://www.github.com",
] * 50

# Function to fetch URL data
def fetch_url_data(url):
    response = requests.get(url)
    return response.text

# 1. Sequential
def main_sequential():
    return [fetch_url_data(url) for url in URLS]

# 2. ThreadPool
def main_threadpool():
    with ThreadPoolExecutor(max_workers=4) as executor:
        return list(executor.map(fetch_url_data, URLS))

# 3. Asyncio with requests
async def main_asyncio():
    # Get the loop started by asyncio.run() and offload the blocking
    # requests calls to the default executor.
    loop = asyncio.get_running_loop()
    futures = [loop.run_in_executor(None, fetch_url_data, url) for url in URLS]
    return await asyncio.gather(*futures)

def run_all_methods_and_time():
    methods = [
        ("Sequential", main_sequential),
        ("ThreadPool", main_threadpool),
        ("Asyncio", lambda: asyncio.run(main_asyncio())),
    ]
    for name, method in methods:
        start_time = timeit.default_timer()
        method()
        elapsed_time = timeit.default_timer() - start_time
        print(f"{name} execution time: {elapsed_time:.4f} seconds")

if __name__ == "__main__":
    run_all_methods_and_time()

Results

Sequential execution time: 37.9845 seconds
ThreadPool execution time: 13.5944 seconds
Asyncio execution time: 3.4348 seconds

The results show that asyncio is efficient for I/O-bound tasks thanks to minimized overhead and the absence of the data-synchronization requirements that come with multithreading.

For CPU-bound tasks, consider:

- Multiprocessing: Processes don't share the GIL, making this approach suitable for CPU-bound tasks. However, make sure the overhead of spawning processes and inter-process communication doesn't outweigh the performance benefits.
- PyPy: An alternative Python interpreter with a Just-In-Time (JIT) compiler. PyPy can deliver performance improvements, especially for CPU-bound tasks.

Here is an example of regex matching (CPU-bound), implemented both without any optimization and with multiprocessing.

Python

import random
import re
import string
import timeit
from multiprocessing import Pool

# Complex regex pattern for non-repeating characters.
PATTERN_REGEX = r"(?:(\w)(?!.*\1)){10}"

def find_pattern(s):
    """Search for the pattern in a given string and return it, or None if not found."""
    match = re.search(PATTERN_REGEX, s)
    return match.group(0) if match else None

# Generating a dataset of random strings
data = [
    ''.join(random.choices(string.ascii_letters + string.digits, k=1000))
    for _ in range(1000)
]

def concurrent_execution():
    with Pool(processes=4) as pool:
        results = pool.map(find_pattern, data)

def sequential_execution():
    results = [find_pattern(s) for s in data]

if __name__ == "__main__":
    # Timing both methods
    concurrent_time = timeit.timeit(concurrent_execution, number=10)
    sequential_time = timeit.timeit(sequential_execution, number=10)
    print(f"Concurrent execution time (multiprocessing): {concurrent_time:.4f} seconds")
    print(f"Sequential execution time: {sequential_time:.4f} seconds")

Results

Concurrent execution time (multiprocessing): 8.4240 seconds
Sequential execution time: 12.8772 seconds

Clearly, multiprocessing beats sequential execution here, and the gap will be even more evident with a real-world workload.

Conclusion

Scaling Python services hinges on recognizing the nature of the work (CPU-bound or I/O-bound) and choosing the appropriate tools and strategies. For I/O-bound services, consider thread pool executors or asyncio; for CPU-bound services, consider multiprocessing.
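One clarification worth adding: the asyncio variant above still runs requests.get on worker threads via run_in_executor, because requests is a blocking library. With natively asynchronous I/O, the event loop alone can interleave many waits on a single thread. Here is a minimal, illustrative sketch of that model, using asyncio.sleep as a stand-in for a real non-blocking network call (the URLs and function names are hypothetical):

```python
import asyncio
import timeit

async def fake_fetch(url):
    # Stand-in for a non-blocking network call; awaiting yields to the event loop.
    await asyncio.sleep(0.1)
    return f"data from {url}"

async def main():
    urls = [f"https://example.com/{i}" for i in range(100)]
    # All 100 "requests" wait concurrently on a single thread.
    return await asyncio.gather(*(fake_fetch(u) for u in urls))

start = timeit.default_timer()
results = asyncio.run(main())
elapsed = timeit.default_timer() - start

# 100 x 0.1s of waiting overlaps, so the total is close to 0.1s, not 10s.
print(f"Fetched {len(results)} URLs in {elapsed:.2f} seconds")
```

Libraries such as aiohttp expose real HTTP calls in this awaitable style; the key point is that no thread pool is involved at all.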
Many libraries for AI app development are primarily written in Python or JavaScript. The good news is that several of these libraries have Java APIs as well. In this tutorial, I'll show you how to build a ChatGPT clone using Spring Boot, LangChain, and Hilla. The tutorial will cover simple synchronous chat completions and a more advanced streaming completion for a better user experience.

Completed Source Code

You can find the source code for the example in my GitHub repository.

Requirements

- Java 17+
- Node 18+
- An OpenAI API key in an OPENAI_API_KEY environment variable

Create a Spring Boot and React Project, Add LangChain

First, create a new Hilla project using the Hilla CLI. This will create a Spring Boot project with a React frontend.

Shell

npx @hilla/cli init ai-assistant

Open the generated project in your IDE. Then, add the LangChain4j dependency to the pom.xml file:

XML

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>0.22.0</version> <!-- TODO: use latest version -->
</dependency>

Simple OpenAI Chat Completions With Memory Using LangChain

We'll begin exploring LangChain4j with a simple synchronous chat completion. In this case, we want to call the OpenAI chat completion API and get a single response. We also want to keep track of up to 1,000 tokens of the chat history.
In the com.example.application.service package, create a ChatService.java class with the following content:

Java

@BrowserCallable
@AnonymousAllowed
public class ChatService {

    @Value("${openai.api.key}")
    private String OPENAI_API_KEY;

    private Assistant assistant;

    interface Assistant {
        String chat(String message);
    }

    @PostConstruct
    public void init() {
        var memory = TokenWindowChatMemory.withMaxTokens(1000,
                new OpenAiTokenizer("gpt-3.5-turbo"));

        assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(OpenAiChatModel.withApiKey(OPENAI_API_KEY))
                .chatMemory(memory)
                .build();
    }

    public String chat(String message) {
        return assistant.chat(message);
    }
}

- @BrowserCallable makes the class available to the front end.
- @AnonymousAllowed allows anonymous users to call the methods.
- @Value injects the OpenAI API key from the OPENAI_API_KEY environment variable.
- Assistant is the interface that we will use to call the chat API.
- init() initializes the assistant with a 1,000-token memory and the gpt-3.5-turbo model.
- chat() is the method that we will call from the front end.

Start the application by running Application.java in your IDE, or with the default Maven goal:

Shell

mvn

This will generate TypeScript types and service methods for the front end.
Next, open App.tsx in the frontend folder and update it with the following content:

TypeScript-JSX

export default function App() {
  const [messages, setMessages] = useState<MessageListItem[]>([]);

  async function sendMessage(message: string) {
    setMessages((messages) => [
      ...messages,
      {
        text: message,
        userName: "You",
      },
    ]);

    const response = await ChatService.chat(message);
    setMessages((messages) => [
      ...messages,
      {
        text: response,
        userName: "Assistant",
      },
    ]);
  }

  return (
    <div className="p-m flex flex-col h-full box-border">
      <MessageList items={messages} className="flex-grow" />
      <MessageInput onSubmit={(e) => sendMessage(e.detail.value)} />
    </div>
  );
}

We use the MessageList and MessageInput components from the Hilla UI component library. sendMessage() adds the message to the list of messages and calls the chat() method on the ChatService class. When the response is received, it is added to the list of messages.

You now have a working chat application that uses the OpenAI chat API and keeps track of the chat history. It works great for short messages, but it is slow for long answers. To improve the user experience, we can use a streaming completion instead, displaying the response as it is received.
Streaming OpenAI Chat Completions With Memory Using LangChain

Let's update the ChatService class to use a streaming completion instead:

Java

@BrowserCallable
@AnonymousAllowed
public class ChatService {

    @Value("${openai.api.key}")
    private String OPENAI_API_KEY;

    private Assistant assistant;

    interface Assistant {
        TokenStream chat(String message);
    }

    @PostConstruct
    public void init() {
        var memory = TokenWindowChatMemory.withMaxTokens(1000,
                new OpenAiTokenizer("gpt-3.5-turbo"));

        assistant = AiServices.builder(Assistant.class)
                .streamingChatLanguageModel(OpenAiStreamingChatModel.withApiKey(OPENAI_API_KEY))
                .chatMemory(memory)
                .build();
    }

    public Flux<String> chatStream(String message) {
        Sinks.Many<String> sink = Sinks.many().unicast().onBackpressureBuffer();

        assistant.chat(message)
                .onNext(sink::tryEmitNext)
                .onComplete(sink::tryEmitComplete)
                .onError(sink::tryEmitError)
                .start();

        return sink.asFlux();
    }
}

The code is mostly the same as before, with some important differences:

- Assistant now returns a TokenStream instead of a String.
- init() uses streamingChatLanguageModel() instead of chatLanguageModel().
- chatStream() returns a Flux<String> instead of a String.
Update App.tsx with the following content:

TypeScript-JSX

export default function App() {
  const [messages, setMessages] = useState<MessageListItem[]>([]);

  function addMessage(message: MessageListItem) {
    setMessages((messages) => [...messages, message]);
  }

  function appendToLastMessage(chunk: string) {
    setMessages((messages) => {
      const lastMessage = messages[messages.length - 1];
      lastMessage.text += chunk;
      return [...messages.slice(0, -1), lastMessage];
    });
  }

  async function sendMessage(message: string) {
    addMessage({
      text: message,
      userName: "You",
    });

    let first = true;
    ChatService.chatStream(message).onNext((chunk) => {
      if (first && chunk) {
        addMessage({
          text: chunk,
          userName: "Assistant",
        });
        first = false;
      } else {
        appendToLastMessage(chunk);
      }
    });
  }

  return (
    <div className="p-m flex flex-col h-full box-border">
      <MessageList items={messages} className="flex-grow" />
      <MessageInput onSubmit={(e) => sendMessage(e.detail.value)} />
    </div>
  );
}

The template is the same as before, but the way we handle the response is different. Instead of waiting for the complete response, we start listening for chunks. When the first chunk is received, we add it as a new message. When subsequent chunks arrive, we append them to the last message.

Re-run the application, and you should see the response displayed as it is received.

Conclusion

As you can see, LangChain makes it easy to build LLM-powered AI applications in Java and Spring Boot. With the basic setup in place, you can extend the functionality by chaining operations, adding external tools, and more, following the examples on the LangChain4j GitHub page, linked earlier in this article. Learn more about Hilla in the Hilla documentation.
It looks like Java 21 is going to pose a strong challenge to Node.js! There are two massive performance enhancements in Java 21, and they address two of Java's most-criticized areas:

- Threads and blocking I/O (somewhat fair criticism)
- GC (relatively unfair criticism?)

Major highlights of Java 21:

- Project Loom and virtual threads
- ZGC (upgraded)

1. Virtual Threads

For the longest time, we have relied on non-blocking I/O and async operations, and then on Promises and async/await for orchestrating them. We have had to deal with callbacks and do things like Promise.all() or CompletableFuture.thenCompose() to join several async operations and process the results. More recently, reactive frameworks have come into the picture to "compose" tasks as functional pipelines and then run them on thread pools or executors. Reactive functional programming is much better than "callback hell," but it forced us into a functional programming model just to do non-blocking/async work in an elegant way.

Virtual threads are bringing an end to callbacks and promises. The Java team has succeeded in providing an almost drop-in replacement for threads with dirt-cheap virtual threads. Even if you call the old Thread.sleep(5000), the virtual thread detaches instead of blocking an OS thread. In terms of numbers, a regular laptop can handle 2,000 to 5,000 platform threads, whereas the same machine can run one million+ virtual threads. In fact, the official recommendation is to avoid pooling virtual threads; every task should run on a new virtual thread. Virtual threads support everything: sleep, wait, ThreadLocal, locks, and so on.

Virtual threads let us write regular old iterative, "seemingly blocking" code and leave it to Java to detach and attach real threads so that it becomes non-blocking and high-performance. However, we still need to wait for library and framework implementers, such as Apache Tomcat and Spring, to move everything from native threads to virtual threads.
Once the frameworks complete the transition, all Java microservices and monoliths that use these upgraded frameworks will become non-blocking automatically. Take the example of some of the thread pools we encounter in our applications: the Apache Tomcat NIO connector has 25-50 worker threads; imagine it having 50,000 virtual threads. An Apache Camel listener usually has 10-20 threads; imagine it having 1,000-2,000 virtual threads. Of course, there are no thread pools with virtual threads, so these components can simply use thousands of threads on demand. This just about puts a full stop to "thread starvation" in Java.

Just by upgrading to frameworks and libraries that fully take advantage of Java 21, all our Java microservices will become non-blocking with existing code. (Caveat: some operations, like synchronized blocks, will still block virtual threads. If we replace them with virtual-thread-friendly alternatives such as Lock.lock(), the virtual thread can detach and do other work until the lock is acquired. For this, a little code change is needed from library authors, and in some cases in project code bases, to get the full benefits of virtual threads.)

2. ZGC

ZGC now supports terabyte-size Java heaps with consistent sub-millisecond pauses. There are no important caveats: it may use, say, 5-10% more memory or have 5-10% slower allocation, but there are no more stop-the-world GC pauses and no more practical heap size limits.

Together, these two performance improvements are going to strengthen Java's position among programming languages. They may pause the rising dominance of Node.js and, to some extent, reactive programming. Reactive/functional programming is still valuable for code readability and for managing heavily event-driven applications, but we no longer need it just to do non-blocking I/O in Java.
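As a brief, hedged illustration (not from the original article), here is what the recommended one-virtual-thread-per-task model looks like on Java 21, using the standard Executors.newVirtualThreadPerTaskExecutor():

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();

        // One new virtual thread per task, as recommended: no pooling.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        // "Blocking" sleep: the virtual thread detaches from its
                        // carrier thread instead of tying up an OS thread.
                        Thread.sleep(Duration.ofMillis(100));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish

        System.out.println("Completed tasks: " + completed.get());
    }
}
```

Ten thousand concurrent 100 ms sleeps complete in a small fraction of the time a 25-50 thread pool would need, because the sleeps overlap instead of queueing for scarce OS threads.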
Welcome to our thorough Spring Boot interview questions guide! Spring Boot has grown in popularity in the Java ecosystem due to its ease of use and the productivity boost it brings to Java application development. This post presents a curated set of frequently asked Spring Boot interview questions to help you ace your interviews, whether you are a newbie discovering Spring Boot or an experienced developer preparing for an interview.

What Is Spring Boot?

Spring Boot is an open-source Java framework built on top of the Spring framework. Spring Boot aims to make it easier to create stand-alone, production-ready applications with minimal setup and configuration.

What Are the Major Advantages of Spring Boot?

Spring Boot offers several significant advantages for Java application development. Some of the key advantages include:

- Simplified configuration: Spring Boot eliminates the need for manual configuration by providing sensible defaults and auto-configuration. It reduces boilerplate code and enables developers to focus more on business logic.
- Rapid application development: Spring Boot provides a range of productivity-enhancing features, such as embedded servers, automatic dependency management, and hot reloading. These features accelerate development and reduce time-to-market.
- Opinionated approach: Spring Boot follows an opinionated approach, providing predefined conventions and best practices. It promotes consistency and reduces the cognitive load of making configuration decisions.
- Microservices-friendly: Spring Boot seamlessly integrates with Spring Cloud, facilitating the development of microservices architectures. It offers built-in support for service discovery, distributed configuration, load balancing, and more.
- Production-ready features: Spring Boot Actuator provides a set of production-ready features for monitoring, health checks, metrics, and security. It allows developers to gain insights into application performance and monitor system health easily.
- Embedded servers: Spring Boot comes with embedded servers like Tomcat, Jetty, and Undertow. This eliminates the need for manual server setup and configuration.
- Auto-configuration: Spring Boot's auto-configuration feature automatically configures the application based on classpath dependencies. It simplifies the setup process and reduces manual configuration effort.
- Community support: Spring Boot has a large and active community of developers, which means a wealth of resources, documentation, and community support is available for troubleshooting and sharing best practices.
- Ecosystem integration: Spring Boot seamlessly integrates with other Spring projects and third-party libraries. It leverages the powerful Spring ecosystem, allowing developers to utilize a wide range of tools and libraries for various purposes.
- Testability: Spring Boot provides excellent support for testing, including unit testing, integration testing, and end-to-end testing. It offers features like test slices, mock objects, and easy configuration for different testing frameworks.

What Are the Key Components of Spring Boot?

Spring Boot incorporates several key components that work together to provide a streamlined and efficient development experience. The major components of Spring Boot are:

- Auto-configuration: Spring Boot's auto-configuration feature automatically configures the application based on the dependencies detected on the classpath. It eliminates the need for manual configuration and reduces boilerplate code.
- Starter dependencies: Spring Boot provides a set of starter dependencies, which are pre-packaged dependencies that facilitate the configuration of common use cases. Starters simplify dependency management and help developers get started quickly with essential features such as web applications, data access, security, testing, and more.
- Embedded servers: Spring Boot includes embedded servlet containers like Tomcat, Jetty, and Undertow. These embedded servers allow developers to package the application as an executable JAR file, simplifying deployment and making the application self-contained.
- Spring Boot Actuator: Spring Boot Actuator provides production-ready features for monitoring and managing the application. It offers endpoints for health checks, metrics, logging, tracing, and more, enabling easy monitoring and management of the application in a production environment.
- Spring Boot CLI: The Spring Boot Command-Line Interface (CLI) allows developers to interact with Spring Boot applications from the command line. It provides a convenient way to quickly prototype, develop, and test Spring Boot applications without complex setup and configuration.
- Spring Boot DevTools: Spring Boot DevTools is a set of tools that enhance the development experience. It includes features like automatic application restart, live reload of static resources, and enhanced error reporting. DevTools significantly improves developer productivity and speeds up the development process.
- Spring Boot testing: Spring Boot provides excellent support for testing applications. It offers various testing utilities, including test slices, mock objects, and easy configuration for different testing frameworks, making it easier to write and execute tests for Spring Boot applications.

What Are the Differences Between Spring and Spring Boot?
Here's a comparison between Spring and Spring Boot:

| Feature | Spring | Spring Boot |
| --- | --- | --- |
| Configuration | Requires manual configuration | Provides auto-configuration and sensible defaults |
| Dependency management | Manual dependency management | Simplified dependency management with starters |
| XML configuration | Relies heavily on XML configuration | Encourages annotations and Java configuration |
| Embedded servers | Requires manual setup and configuration | Includes embedded servers for easy deployment |
| Auto-configuration | Limited auto-configuration capabilities | Powerful auto-configuration for rapid development |
| Development time | Longer development setup and configuration | Faster development with out-of-the-box defaults |
| Convention over configuration | Emphasizes configuration | Emphasizes convention over configuration |
| Microservices support | Supports microservices architecture | Provides seamless integration with Spring Cloud |
| Testing support | Strong testing support with Spring Test | Enhanced testing support with Spring Boot Test |
| Actuator | Available as a separate module | Integrated into the core framework |

It's important to note that Spring and Spring Boot are not mutually exclusive. Spring Boot is built on top of the Spring framework and provides additional capabilities to simplify and accelerate Spring application development.

What Is the Purpose of the @SpringBootApplication Annotation?

The @SpringBootApplication annotation combines three other commonly used annotations: @Configuration, @EnableAutoConfiguration, and @ComponentScan. This single annotation allows for concise and streamlined configuration of the application. The @SpringBootApplication annotation is typically placed on the main class of a Spring Boot application. It acts as the entry point for the application and bootstraps the Spring Boot runtime environment.

What Are Spring Boot Starters?
Spring Boot starters are curated collections of pre-configured dependencies that simplify the setup and configuration of specific functionalities in a Spring Boot application. They provide the necessary dependencies, sensible default configurations, and auto-configuration. For instance, the spring-boot-starter-web starter includes dependencies for web-related libraries and provides default configurations for handling web requests. Starters streamline dependency management and ensure that the required components work together seamlessly. By including a starter in your project, you save time and effort by avoiding manual configuration and gain the benefits of Spring Boot's opinionated approach to application development.

Examples of Commonly Used Spring Boot Starters

Here are some examples of commonly used Spring Boot starters:

- spring-boot-starter-web: Used for developing web applications with Spring MVC. It includes dependencies for handling HTTP requests, managing web sessions, and serving static resources.
- spring-boot-starter-data-jpa: Provides support for data access using the Java Persistence API (JPA), with Hibernate as the default implementation. It includes dependencies for database connectivity, entity management, and transaction management.
- spring-boot-starter-security: Used for adding security features to a Spring Boot application. It includes dependencies for authentication, authorization, and secure communication.
- spring-boot-starter-test: Used for writing unit tests and integration tests in a Spring Boot application. It includes dependencies for testing frameworks like JUnit, Mockito, and Spring Test.
- spring-boot-starter-actuator: Adds production-ready features to monitor and manage the application. It includes endpoints for health checks, metrics, logging, and more.
- spring-boot-starter-data-redis: Used for working with Redis, an in-memory data store. It includes dependencies for connecting to a Redis server and performing data operations.
- spring-boot-starter-amqp: Provides support for messaging with the Advanced Message Queuing Protocol (AMQP). It includes dependencies for messaging brokers like RabbitMQ.
- spring-boot-starter-mail: Used for sending emails from a Spring Boot application. It includes dependencies for email-related functionality.
- spring-boot-starter-cache: Provides support for caching data in a Spring Boot application. It includes dependencies for caching frameworks like Ehcache and Redis.
- spring-boot-starter-oauth2-client: Used for implementing OAuth 2.0 client functionality in a Spring Boot application. It includes dependencies for interacting with OAuth 2.0 authorization servers.

The entire list of Spring Boot starters can be found on the official Spring Boot website, in the official Spring Boot Starters documentation.

What Is the Default Embedded Server Used by Spring Boot?

The default embedded server used by Spring Boot is Apache Tomcat. Spring Boot includes Tomcat as a dependency and automatically configures it as the default embedded server when you use the spring-boot-starter-web starter or any other web-related starter.

How Do You Configure Properties in a Spring Boot Application?

In a Spring Boot application, properties can be configured using various methods. Here are the commonly used approaches:

- application.properties or application.yml file: Spring Boot allows you to define configuration properties in an application.properties file (key-value format) or an application.yml file (YAML format).
- Command-line arguments: Spring Boot supports configuring properties using command-line arguments. You can pass properties in the format --property=value when running the application. For example: java -jar myapp.jar --server.port=8080.
- Environment variables: Spring Boot can read properties from environment variables. You can define environment variables with property names and values, and Spring Boot will automatically map them to the corresponding properties.
- System properties: Spring Boot also supports configuring properties using system properties. You can pass system properties to the application using the -D flag when running it. For example: java -jar myapp.jar -Dserver.port=8080.
- @ConfigurationProperties annotation: Spring Boot provides the @ConfigurationProperties annotation, which allows you to bind external properties directly to Java objects. You define a configuration class annotated with @ConfigurationProperties and specify the prefix of the properties to be bound; Spring Boot automatically maps the properties to the corresponding fields of the configuration class.

What Is Spring Boot Auto-Configuration, and How Does It Work?

Spring Boot auto-configuration automatically configures the application context based on the classpath dependencies, reducing the need for manual configuration. It scans the classpath for required libraries and sets up the necessary beans and components. Auto-configuration follows predefined rules and uses annotations like @ConditionalOnClass and @ConditionalOnMissingBean to enable configurations selectively. By adding the @EnableAutoConfiguration annotation to the main application class, Spring Boot triggers the auto-configuration process.

Auto-configuration classes are typically packaged in starters, which contain the necessary configuration classes and dependencies. Including a starter in your project enables Spring Boot to automatically configure the relevant components.

The benefits of Spring Boot auto-configuration include:

- Reduced boilerplate code: Auto-configuration eliminates the need for manual configuration, reducing the amount of boilerplate code required to set up common functionalities.
- Opinionated defaults: Auto-configuration provides sensible defaults and conventions based on best practices. This allows developers to get started with Spring Boot projects quickly without spending time on manual configuration.
- Integration with third-party libraries: Auto-configuration seamlessly integrates with popular libraries and frameworks, automatically configuring the beans and components required for their usage.
- Conditional configuration: Auto-configuration applies configurations conditionally based on the presence or absence of specific classes or beans. This ensures that conflicting configurations are avoided and only relevant configurations are applied.

How Can You Create a Spring Boot Application Using Spring Initializr?

Creating a Spring Boot application is made easy by the Spring Initializr, a web-based tool that generates a project template with all the necessary dependencies and configuration for a Spring Boot application. To create a Spring Boot application using the Spring Initializr, follow these steps:

1. Visit the Spring Initializr website: Go to the official Spring Initializr website at start.spring.io.
2. Configure project settings: On the Spring Initializr website, you'll find various options to configure your project. Provide the following details: the project type (Maven or Gradle), the programming language (Java or Kotlin), the desired Spring Boot version, and the project metadata (group, artifact, and package name).
3. Add dependencies: In the "Dependencies" section, search for and select the dependencies you need for your project. The Spring Initializr provides a wide range of options, such as Spring Web, Spring Data JPA, Spring Security, and more.
4. Generate the project: Once you've configured the project settings and added the desired dependencies, click the "Generate" button. The Spring Initializr will generate a downloadable project archive (a ZIP file) based on your selections.
5. Extract the project: Download the generated ZIP file and extract it to your desired location.
6. Import the project into your IDE: Open your preferred IDE (e.g., IntelliJ IDEA, Eclipse, or Visual Studio Code) and import the extracted project as a Maven or Gradle project.
7. Start developing: With the project imported, you can start developing your Spring Boot application. Add your application logic, and create controllers, services, repositories, and other components to implement your desired functionality.
8. Run the application: Use your IDE's run configuration or command-line tools to run the Spring Boot application. The application will start, and you can access it using the provided URLs or endpoints.

What Are Spring Boot Actuators? What Are the Different Actuator Endpoints?

Spring Boot Actuator is a set of production-ready management and monitoring tools provided by the Spring Boot framework. It enables you to monitor and interact with your Spring Boot application at runtime, providing valuable insights into its health, metrics, and various other aspects.

Actuator exposes a set of RESTful endpoints that allow you to access useful information and perform certain operations on your Spring Boot application. Some of the commonly used endpoints include:

- /actuator/health: Provides information about the health of your application, indicating whether it is up and running or experiencing issues.
- /actuator/info: Displays general information about your application, which can be customized to include details such as version, description, and other relevant metadata.
- /actuator/metrics: Provides metrics about various aspects of your application, such as memory usage, CPU usage, request/response times, and more. These metrics are helpful for monitoring and performance analysis.
- /actuator/env: Shows the current environment properties and their values, including configuration properties from external sources like application.properties or environment variables.
- /actuator/loggers: Allows you to view and modify the logging levels of your application's loggers dynamically, which is useful for troubleshooting and debugging.
- /actuator/mappings: Displays a detailed mapping of all the endpoints exposed by your application, including the HTTP methods supported by each endpoint.
- /actuator/beans: Provides a complete list of all the Spring beans in your application, including their names, types, and dependencies.

Explain the Concept of Spring Boot Profiles and How They Can Be Used

Spring Boot profiles provide a way to manage application configurations for different environments or deployment scenarios. With profiles, you can define different sets of configurations for development, testing, production, and any other specific environment. Here's a brief explanation of how Spring Boot profiles work and how they can be used:

- Defining profiles: In a Spring Boot application, you define profiles by creating separate properties files for each environment. For example, you can have application-dev.properties for development, application-test.properties for testing, and application-prod.properties for production.
- Activating profiles: Profiles can be activated in various ways: by setting the spring.profiles.active property in application.properties or as a command-line argument when starting the application, by using the @ActiveProfiles annotation at the class level in your tests, or by using system environment variables or JVM system properties to specify the active profiles.
- Profile-specific configurations: Once a profile is activated, Spring Boot loads the corresponding property files and applies the configurations defined in them.
For example, if the dev profile is active, Spring Boot will load the application-dev.properties file and apply the configurations defined within it.

Overriding configurations: Profile-specific configurations can override the default configurations defined in application.properties or other property files. This allows you to customize certain settings specifically for each environment without modifying the core application code.

Bean and component scanning: Profiles can also be used to control the bean and component scanning process. You can annotate beans or components with @Profile to specify that they should only be created and registered when a specific profile is active.

Spring Boot’s default profiles: Spring Boot provides some default profiles, such as default, dev, test, and prod. The default profile is always active by default, and other profiles can be activated based on the deployment environment.

What Are Some Commonly Used Annotations in Spring Boot?

Here are some commonly used Spring Boot annotations:

@SpringBootApplication: This annotation is used to mark the main class of a Spring Boot application. It combines three annotations: @Configuration, @EnableAutoConfiguration, and @ComponentScan. It enables auto-configuration and component scanning in your application.

@RestController: This annotation is used to indicate that a class is a RESTful controller. It combines the @Controller and @ResponseBody annotations. It simplifies the development of RESTful web services by eliminating the need to annotate each method with @ResponseBody.

@RequestMapping: This annotation is used to map HTTP requests to specific handler methods. It can be applied at the class level to specify a base URL for all methods within the class or at the method level to define the URL and HTTP method for a specific handler method.

@Autowired: This annotation is used to automatically wire dependencies into your Spring-managed beans. It allows Spring to automatically discover and inject the required beans without the need for explicit bean configuration.

@Value: This annotation is used to inject values from properties files or environment variables into Spring beans. It can be used to inject simple values like strings or numbers, as well as complex objects.

@Configuration: This annotation is used to indicate that a class provides configuration to the Spring application context. It is often used in conjunction with @Bean to define beans and other configuration elements.

@ComponentScan: This annotation is used to specify the base packages to scan for Spring components, such as controllers, services, and repositories. It allows Spring to automatically discover and register these components in the application context.

@EnableAutoConfiguration: This annotation enables Spring Boot’s auto-configuration mechanism, which automatically configures various components and beans based on the dependencies and the classpath.

@Conditional: This annotation is used to conditionally enable or disable beans and configurations based on certain conditions. It allows you to customize the behavior of your application based on specific conditions or environment settings.

Intermediate Level Spring Boot Interview Questions

What Is the Use of @ConfigurationProperties Annotation?

The @ConfigurationProperties annotation in Spring Boot is used to bind external configuration properties to a Java class. It provides a convenient way to map the properties defined in configuration files (such as application.properties or application.yml) to corresponding fields in a configuration class. The benefits of using @ConfigurationProperties include:

Type safety: The annotation ensures that the configuration properties are bound to the appropriate types defined in the configuration class, preventing type mismatches and potential runtime errors.
Property validation: You can validate the properties using various validation annotations provided by Spring, such as @NotNull, @Min, @Max, and custom validations.

Hierarchical property mapping: You can define nested configuration classes to represent complex configuration structures and map them hierarchically to the corresponding properties.

Easy integration: The annotated configuration class can be easily autowired and used throughout the application, simplifying the retrieval of configuration values in different components.

Here’s an example of using @ConfigurationProperties:

    @Configuration
    @ConfigurationProperties(prefix = "myapp")
    public class MyAppConfiguration {

        private String name;
        private int port;

        // Getters and setters

        // Other custom methods or business logic
    }

application.properties:

    # Database Configuration
    spring.datasource.url=jdbc:mysql://localhost:3306/mydatabase
    spring.datasource.username=myusername
    spring.datasource.password=mypassword

    # Server Configuration
    server.port=8080
    server.servlet.context-path=/myapp

    # Custom Application Properties
    myapp.name=My Application
    myapp.api.key=abc123

In this example, the MyAppConfiguration class is annotated with @ConfigurationProperties and specifies a prefix of “myapp”. The properties defined with the prefix “myapp” in the configuration files will be bound to the corresponding fields in this class.

How Does Spring Boot Support Microservices Architecture?

Spring Boot provides extensive support for building microservices-based applications. It offers a range of features and integrations that simplify the development, deployment, and management of microservices. Here’s how Spring Boot supports the microservices architecture:

Spring Cloud: Spring Boot integrates seamlessly with Spring Cloud, which is a set of tools and frameworks designed to build and operate cloud-native microservices.
Spring Cloud provides capabilities such as service discovery, client-side load balancing, distributed configuration management, circuit breakers, and more.

Microservice design patterns: Spring Boot embraces microservice design patterns, such as the use of RESTful APIs for communication between services, stateless services for scalability, and decentralized data management. It provides a lightweight and flexible framework that enables developers to implement these patterns easily.

Service registration and discovery: Spring Boot integrates with service registry and discovery tools, such as Netflix Eureka and Consul. These tools allow microservices to register themselves with the registry and discover other services dynamically. This helps in achieving service resilience, load balancing, and automatic service discovery.

Externalized configuration: Spring Boot supports externalized configuration, allowing microservices to be easily configured based on the environment or specific deployment needs. It enables the separation of configuration from the code, making it easier to manage configuration properties across multiple microservices.

Distributed tracing and monitoring: Spring Boot integrates with distributed tracing systems like Zipkin and Sleuth, enabling the tracing of requests across multiple microservices. It also provides integrations with monitoring tools like Prometheus and Grafana to monitor the health, performance, and resource usage of microservices.

Resilience and fault tolerance: Spring Boot includes support for implementing fault-tolerant microservices using features such as circuit breakers (e.g., Netflix Hystrix), which help prevent cascading failures in distributed systems. It also provides mechanisms for handling retries, timeouts, and fallbacks in microservice interactions.

Containerization and deployment: Spring Boot applications can be easily containerized using technologies like Docker, allowing for seamless deployment and scaling of microservices using container orchestration platforms like Kubernetes.

What Is Spring Data? What Are Different Spring Data Starters Used in Spring Boot?

Spring Data is a subproject of the Spring Framework that simplifies data access by providing a unified programming model for different data storage technologies. It reduces boilerplate code and allows developers to focus on business logic. Spring Data supports relational databases, NoSQL databases, and more. It utilizes repositories to abstract data access operations, eliminating the need for manual CRUD code. Spring Data’s starters offer pre-configured dependencies and auto-configuration for specific databases, streamlining the setup process. With Spring Data, developers can easily interact with data sources and benefit from its powerful query capabilities. Here are some examples of Spring Data starters for different types of databases:

spring-boot-starter-data-jpa: This starter provides support for the Java Persistence API (JPA) and Hibernate. It includes the necessary dependencies and configurations for working with relational databases using JPA.

spring-boot-starter-data-mongodb: This starter provides support for MongoDB, a popular NoSQL database. It includes the necessary dependencies and configurations for working with MongoDB using Spring Data MongoDB.

spring-boot-starter-data-redis: This starter provides support for Redis, an in-memory data structure store. It includes the necessary dependencies and configurations for working with Redis using Spring Data Redis.

spring-boot-starter-data-cassandra: This starter provides support for Apache Cassandra, a highly scalable NoSQL database. It includes the necessary dependencies and configurations for working with Cassandra using Spring Data Cassandra.
spring-boot-starter-data-elasticsearch: This starter provides support for Elasticsearch, a distributed search and analytics engine. It includes the necessary dependencies and configurations for working with Elasticsearch using Spring Data Elasticsearch.

How Can You Consume RESTful Web Services in a Spring Boot Application?

In a Spring Boot application, you can consume RESTful web services using RestTemplate or WebClient. RestTemplate provides a synchronous API for making HTTP requests, while WebClient offers a non-blocking and reactive approach. Both allow you to send GET, POST, PUT, and DELETE requests, handle response data, and deserialize it into Java objects.

How Can You Create and Run Unit Tests for a Spring Boot Application?

In a Spring Boot application, you can create and run unit tests using the Spring Test framework. By leveraging annotations such as @RunWith(SpringRunner.class) and @SpringBootTest, you can initialize the application context and perform tests on beans and components. Additionally, you can use Mockito or other mocking frameworks to mock dependencies and isolate the units under test. With the help of assertions from JUnit or AssertJ, you can verify expected behavior. Finally, running the tests can be done using tools like Maven or Gradle, which execute the tests and provide reports on test results and coverage.

How To Enable Debugging Log in the Spring Boot Application?

To enable debugging logs in a Spring Boot application, you can set the logging.level property in the application.properties or application.yml file to “DEBUG.” This configuration will enable the logging framework to output detailed debug information. Alternatively, you can use the @Slf4j annotation on a class to enable logging for that specific class. Additionally, you can configure logging levels for specific packages or classes by setting the logging.level.{package/class} property in the configuration file.

To enable debugging logs for a specific package, use:

    logging.level.<package-name>=DEBUG

To enable debugging logs for the entire application, use:

    logging.level.root=DEBUG

How Reactive Programming Is Supported in Spring Boot

Spring Boot provides support for reactive programming through its integration with the Spring WebFlux module. It allows developers to build non-blocking, event-driven applications that can handle a high volume of concurrent requests efficiently, making use of reactive streams and the reactive programming model.

How Do You Enable Security in Spring Boot Application?

We can use the following different options:

Using Spring Security: Enable security by adding the Spring Security Starter dependency to your project’s build configuration.

OAuth2 and OpenID Connect: Enable security using OAuth2 and OpenID Connect protocols for secure authentication and authorization.

LDAP Integration: Enable security by integrating with an LDAP (Lightweight Directory Access Protocol) server for user authentication and authorization.

JWT (JSON Web Token) Authentication: Enable security by implementing JWT-based authentication for stateless and scalable authentication.

Custom Authentication Providers: Enable security by creating custom authentication providers to handle authentication based on your own logic.

What Is the Purpose of Spring Boot DevTools? How Do We Enable It?

It is designed to improve productivity, streamline development workflows, and enable quick application restarts during the development phase. Here are some key features and benefits of Spring Boot DevTools:

Automatic restart: DevTools monitors the classpath for any changes and automatically restarts the application when it detects modifications.

Live reload: DevTools supports live reloading of static resources such as HTML, CSS, and JavaScript files.

Remote application restart: DevTools provides the ability to remotely trigger a restart of the application.
This can be useful when working in a distributed environment or when the application is running on a remote server.

Developer-friendly error page: DevTools provides an enhanced error page that provides detailed information about exceptions and errors encountered during application development.

Configuration properties support: DevTools enables hot-reloading of Spring Boot configuration properties. Changes made to application.properties or application.yml files are automatically applied without restarting the application, allowing for quick configuration updates.

Database console: DevTools includes an embedded database console that provides a web-based interface to interact with the application’s database. This allows developers to easily execute SQL queries, view and modify data, and perform other database-related tasks without requiring external tools.

To enable Spring Boot DevTools in your Spring Boot application, you need to include the appropriate dependencies and configurations. Here are the steps to enable DevTools:

Add the DevTools Dependency: In your pom.xml file (for Maven) or build.gradle file (for Gradle), add the following dependency:

    <!-- Maven -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>

    // Gradle
    implementation 'org.springframework.boot:spring-boot-devtools'

Enable Automatic Restart: By default, DevTools is enabled for applications run using the spring-boot:run command or from an IDE. However, you can also enable it for packaged applications by adding the following configuration property to your application.properties or application.yml file:

    spring.devtools.restart.enabled=true

Advanced Level Spring Boot Interview Questions for Experienced Folks

How Can You Enable HTTPS in a Spring Boot Application?
To enable HTTPS in a Spring Boot application, you need to configure the application’s server properties and provide the necessary SSL/TLS certificates. Here are the general steps to enable HTTPS:

Obtain SSL/TLS Certificates: Acquire the SSL/TLS certificates from a trusted certificate authority (CA) or generate a self-signed certificate for development/testing purposes.

Configure Server Properties: Update the application.properties or application.yml file with the following server configurations:

    server.port=8443
    server.ssl.key-store-type=PKCS12
    server.ssl.key-store=classpath:keystore.p12
    server.ssl.key-store-password=your_keystore_password
    server.ssl.key-alias=your_alias_name

In the above example, replace keystore.p12 with the path to your keystore file, and set the appropriate password and alias values.

Provide SSL/TLS Certificates: Place the SSL/TLS certificate file (keystore.p12) in the classpath or specify the absolute file path in the server properties.

Restart the Application: Restart your Spring Boot application for the changes to take effect.

After completing these steps, your Spring Boot application will be configured to use HTTPS. You can access the application using https://localhost:8443 or the appropriate hostname and port specified in the server configuration.

How to Configure External Configuration in Spring Boot

To configure external configuration outside the project in Spring Boot, you can use one of the following approaches:

External configuration files: Instead of placing the application.properties or application.yml file within the project, you can specify the location of an external configuration file using the spring.config.name and spring.config.location properties. For example, you can place the configuration file in a separate directory and provide its location through the command line or an environment variable.

Operating system environment variables: You can leverage the environment variables provided by the operating system to configure your Spring Boot application. Define the required configuration properties as environment variables and access them in your application using the @Value annotation or the Environment object.

Spring Cloud Config: If you have a more complex configuration setup or need centralized configuration management, you can use Spring Cloud Config. It provides a server-side component where you can store and manage configurations for multiple applications. Your Spring Boot application can then fetch its configuration from the Spring Cloud Config server.

Configuration servers: Another option is to use external configuration servers like Apache ZooKeeper or HashiCorp Consul. These servers act as central repositories for configurations and can be accessed by multiple applications.

How Do You Create a Spring Boot Application Using Maven?

To create a Spring Boot application using Maven, follow these steps:

Set Up Maven: Ensure that Maven is installed on your system. You can download Maven from the Apache Maven website and follow the installation instructions.

Create a Maven Project: Open your command line or terminal and navigate to the directory where you want to create your project. Use the following Maven command to create a new project:

    mvn archetype:generate -DgroupId=com.example -DartifactId=my-spring-boot-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

This command creates a new Maven project with the specified groupId and artifactId. Adjust these values according to your project’s needs.
Add Spring Boot Starter Dependency: Open the pom.xml file of your project and add the following dependency for Spring Boot:

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <version>2.5.2</version>
        </dependency>
    </dependencies>

This dependency includes the necessary Spring Boot libraries for your application.

Create a Spring Boot Main Class: Create a new Java class in the appropriate package of your Maven project. This class will serve as the entry point for your Spring Boot application. Here’s an example:

    package com.example;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class MySpringBootApplication {

        public static void main(String[] args) {
            SpringApplication.run(MySpringBootApplication.class, args);
        }
    }

The @SpringBootApplication annotation combines several annotations required for a basic Spring Boot application.

Build and Run the Application: Use the following Maven command to build and run your Spring Boot application:

    mvn spring-boot:run

Maven will build your project, resolve the dependencies, and start the Spring Boot application. Once the application is running, you can access it in your web browser using http://localhost:8080 (by default) or the specified port if you have customized it.

That’s it! You have created a Spring Boot application using Maven. You can now add additional dependencies, configure your application, and develop your desired functionality.

What Are Different Types of Conditional Annotations?

Some of the commonly used conditional annotations in Spring Boot are:

@ConditionalOnClass: This annotation configures a bean or component if a specific class is present in the classpath.

@ConditionalOnMissingClass: This annotation configures a bean or component if a specific class is not present in the classpath.

@ConditionalOnBean: This annotation configures a bean or component only if another specific bean is present in the application context.

@ConditionalOnMissingBean: This annotation configures a bean or component only if another specific bean is not present in the application context.

@ConditionalOnProperty: This annotation configures a bean or component based on the values of specific properties in the application configuration files.

@ConditionalOnExpression: This annotation configures a bean or component based on a SpEL (Spring Expression Language) expression.

@ConditionalOnWebApplication: This annotation configures a bean or component only if the application is a web application.

@ConditionalOnNotWebApplication: This annotation configures a bean or component only if the application is not a web application.

Could You Provide an Explanation of Spring Boot Actuator Health Checks and the Process for Creating Custom Health Indicators?

Health checks provide valuable information about the application’s overall health, such as database connectivity, external service availability, or any other custom checks you define. By default, Spring Boot Actuator provides a set of predefined health indicators that check the health of various components like the database, disk space, and others. However, you can also create custom health indicators to monitor specific aspects of your application.

To create a custom health indicator, you need to implement the HealthIndicator interface and override the health() method. The health() method should return an instance of Health, which represents the health status of your custom component. You can use the Health class to indicate whether the component is up, down, or in an unknown state.
Here’s an example of a custom health indicator:

    import org.springframework.boot.actuate.health.Health;
    import org.springframework.boot.actuate.health.HealthIndicator;
    import org.springframework.stereotype.Component;

    @Component
    public class CustomHealthIndicator implements HealthIndicator {

        @Override
        public Health health() {
            // Perform your custom health check logic here
            boolean isHealthy = true; // Replace with your actual health check logic
            if (isHealthy) {
                return Health.up().build();
            } else {
                return Health.down().withDetail("CustomComponent", "Not Healthy").build();
            }
        }
    }

In this example, the CustomHealthIndicator class implements the HealthIndicator interface and overrides the health() method. Inside the health() method, you can write your custom health check logic. If the component is healthy, you can return Health.up(). Otherwise, you can return Health.down() along with additional details using the withDetail() method.

Once you create a custom health indicator, it will be automatically detected by Spring Boot Actuator, and its health check will be exposed through the /actuator/health endpoint.

How Do You Create Custom Actuator Endpoints in Spring Boot?

To create custom Actuator endpoints in Spring Boot, you can follow these steps:

Create a custom endpoint: Create a new class that represents your custom endpoint. This class should be annotated with @Endpoint to indicate that it is an Actuator endpoint. You can also use additional annotations like @ReadOperation, @WriteOperation, or @DeleteOperation to define the type of operations supported by your endpoint.

Define endpoint operations: Inside your custom endpoint class, define the operations that your endpoint should perform. You can use annotations like @ReadOperation for read-only operations, @WriteOperation for write operations, and @DeleteOperation for delete operations. These annotations help in defining the HTTP methods and request mappings for your endpoint.

Implement endpoint logic: Implement the logic for each operation of your custom endpoint. This can include retrieving information, modifying the application state, or performing any other desired actions. You have the flexibility to define the functionality based on your specific requirements.

(Optional) Add security configuration: If your custom endpoint requires security restrictions, you can configure it by adding security annotations or by modifying the security configuration of your application.

Here’s an example of creating a custom Actuator endpoint:

    import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
    import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
    import org.springframework.boot.actuate.endpoint.annotation.WriteOperation;
    import org.springframework.stereotype.Component;

    @Component
    @Endpoint(id = "customEndpoint")
    public class CustomEndpoint {

        @ReadOperation
        public String getInformation() {
            // Retrieve and return information
            return "This is custom endpoint information.";
        }

        @WriteOperation
        public void updateInformation(String newInformation) {
            // Update information
            // ...
        }
    }

In this example, the CustomEndpoint class is annotated with @Endpoint to define it as an Actuator endpoint. It has two operations: getInformation() annotated with @ReadOperation for retrieving information and updateInformation() annotated with @WriteOperation for updating information.

After creating your custom endpoint, it will be automatically registered with Spring Boot Actuator, and you can access it through the /actuator base path along with the endpoint ID. In this case, the custom endpoint can be accessed via /actuator/customEndpoint.

How Can You Enable CORS (Cross-Origin Resource Sharing) In a Spring Boot Application?
To enable Cross-Origin Resource Sharing (CORS) in a Spring Boot application, you can follow these steps:

Add CORS Configuration: Create a configuration class and annotate it with @Configuration to define the CORS configuration. Inside the class, create a bean of type CorsFilter to configure the CORS settings.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.cors.CorsConfiguration;
    import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
    import org.springframework.web.filter.CorsFilter;

    @Configuration
    public class CorsConfig {

        @Bean
        public CorsFilter corsFilter() {
            UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
            CorsConfiguration config = new CorsConfiguration();
            // Allow requests from any origin
            config.addAllowedOrigin("*");
            // Allow specific HTTP methods (e.g., GET, POST, PUT, DELETE)
            config.addAllowedMethod("*");
            // Allow specific HTTP headers
            config.addAllowedHeader("*");
            source.registerCorsConfiguration("/**", config);
            return new CorsFilter(source);
        }
    }

In this example, we configure CORS to allow requests from any origin (*), allow all HTTP methods (*), and allow all HTTP headers (*). You can customize these settings based on your specific requirements.

Enable Web MVC Configuration: If you haven’t done so already, make sure to enable Web MVC configuration in your Spring Boot application by either using the @EnableWebMvc annotation on a configuration class or implementing the WebMvcConfigurer interface (the older WebMvcConfigurerAdapter class is deprecated as of Spring 5).

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

    @Configuration
    @EnableWebMvc
    public class WebMvcConfig implements WebMvcConfigurer {
        // Additional Web MVC configuration if needed
    }

Test CORS Configuration: Once you have enabled CORS in your Spring Boot application, you can test it by making cross-origin requests to your endpoints. Ensure that the necessary CORS headers are included in the response, such as Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers.

Enabling CORS allows your Spring Boot application to handle cross-origin requests and respond appropriately. It’s important to consider the security implications and configure CORS settings based on your application’s requirements and security policies.

How Can You Schedule Tasks in a Spring Boot Application?

In a Spring Boot application, you can schedule tasks using the @Scheduled annotation provided by Spring’s Task Scheduling feature. Here’s how you can schedule tasks in a Spring Boot application:

Enable Scheduling: First, make sure that task scheduling is enabled in your Spring Boot application. This can be done by either using the @EnableScheduling annotation on a configuration class or by adding the @SpringBootApplication annotation along with @EnableScheduling on your main application class.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.scheduling.annotation.EnableScheduling;

    @SpringBootApplication
    @EnableScheduling
    public class MyAppApplication {

        public static void main(String[] args) {
            SpringApplication.run(MyAppApplication.class, args);
        }
    }

Create Scheduled Task Method: Define a method in your application that you want to schedule. Annotate the method with @Scheduled and specify the desired scheduling expression using cron, fixed delay, or fixed rate.

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class MyScheduledTasks {

        @Scheduled(cron = "0 0 8 * * *") // Run at 8 AM every day
        public void executeTask() {
            // Logic for the scheduled task
            System.out.println("Scheduled task executed!");
        }
    }

In this example, the executeTask() method is scheduled to run at 8 AM every day based on the cron expression provided.
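For intuition, fixed-rate scheduling behaves much like the JDK’s own ScheduledExecutorService. The following self-contained, plain-Java sketch (no Spring required; class and interval are chosen for illustration) mimics roughly what @Scheduled(fixedRate = 100) does:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fire the task every 100 ms, analogous to @Scheduled(fixedRate = 100)
        Runnable task = () -> System.out.println("tick");
        ScheduledFuture<?> handle =
                scheduler.scheduleAtFixedRate(task, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(350);   // let the task fire a few times
        handle.cancel(false);
        scheduler.shutdown();
        System.out.println("done");
    }
}
```

The key difference in Spring is that the scheduler, thread pool, and bean lifecycle are managed for you; with the raw executor you must cancel and shut down explicitly, as shown above.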
Test the Scheduled Task: Once you have defined the scheduled task, you can start your Spring Boot application and observe the scheduled task executing based on the specified schedule. The @Scheduled annotation provides several options for specifying the scheduling expression, including cron expressions, fixed delays, and fixed rates. You can choose the most appropriate option based on your scheduling requirements. How Can You Enable Caching in a Spring Boot Application? To enable caching in a Spring Boot application, you can follow these steps: Add Caching Dependencies: Ensure that the necessary caching dependencies are included in your project’s dependencies. Spring Boot provides support for various caching providers such as Ehcache, Redis, and Caffeine. Add the corresponding caching dependency to your project’s pom.xml file. For example, to use the Ehcache caching provider, add the following dependency:<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-cache</artifactId> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> </dependency> Enable Caching: To enable caching in your Spring Boot application, add the @EnableCachingannotation to your configuration class. This annotation enables Spring’s caching infrastructure and prepares the application for caching.import org.springframework.cache.annotation.EnableCaching; import org.springframework.context.annotation.Configuration; @Configuration @EnableCaching public class CachingConfig { // Additional configuration if needed } Annotate Methods for Caching: Identify the methods in your application that you want to cache and annotate them with the appropriate caching annotations. Spring Boot provides annotations such as @Cacheable, @CachePut, and @CacheEvict for caching operations. For example, suppose you have a method that retrieves data from a database, and you want to cache the results. 
You can annotate the method with @Cacheable and specify the cache name.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class DataService {

    @Cacheable("dataCache")
    public Data getData(String key) {
        // Logic to fetch data from a database or external service
        return data;
    }
}
```

In this example, the getData() method is annotated with @Cacheable and specifies the cache name as “dataCache”. The first time this method is called with a specific key, the data will be fetched and cached. Subsequent calls with the same key will retrieve the data from the cache instead of executing the method.

Configure Cache Settings: If you need to customize the caching behavior, you can provide additional configuration properties specific to your chosen caching provider. These configuration properties can be set in the application.properties or application.yml file. For example, if you are using Ehcache, you can configure the cache settings in the ehcache.xml file and specify the location of the file in the application.properties file.

By following these steps, you can enable caching in your Spring Boot application and leverage the benefits of caching to improve performance and reduce database or external service calls.

Conclusion

In conclusion, these Spring Boot interview questions cover a wide range of topics related to Spring Boot, its features, and best practices. By familiarizing yourself with these questions and their answers, you can better prepare for Spring Boot interviews and demonstrate your knowledge and expertise in developing Spring Boot applications.
Are you struggling to keep the documentation of your Spring configuration properties in line with the code? In this blog, you will take a look at the Spring Configuration Property Documenter Maven plugin, which solves this issue for you. Enjoy!

Introduction

Almost every Spring (Boot) application makes use of configuration properties. These configuration properties ensure that certain items in your application can be configured by means of an application.properties (or YAML) file. However, there is also a need to document these properties in order for someone to know what they mean, how to use them, etc. This is often done in a README file, which needs to be maintained manually, while the properties are present in a Java class that also contains documentation and annotations. Wouldn’t it be great if the documentation lived in one location (the Java class, close to the code) and could be generated from the code? Good news! This is exactly what the Spring Configuration Property Documenter Maven plugin will do for you! In the remainder of this post, you will explore some of the features of this Maven plugin and how you can easily incorporate it into your project. The official documentation is more elaborate and can be found here. The sources used in this blog can be found on GitHub.

Sample Application

First of all, you need to create a basic sample application. Navigate to Spring Initializr and select the dependencies Spring Web, Lombok, and Spring Configuration Processor. Annotate the main Spring Boot application class with @ConfigurationPropertiesScan.

```java
@SpringBootApplication
@ConfigurationPropertiesScan("com.mydeveloperplanet.myspringconfigdocplanet.config")
public class MySpringConfigDocPlanetApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringConfigDocPlanetApplication.class, args);
    }
}
```

Create a configuration class MyFirstProperties in the package config.
The configuration class makes use of constructor binding. See also a previous post, "Spring Boot Configuration Properties Explained," for more information about the different ways to create configuration properties.

```java
@Getter
@ConfigurationProperties("my.first.properties")
public class MyFirstProperties {

    private final String stringProperty;
    private final boolean booleanProperty;

    public MyFirstProperties(String stringProperty, boolean booleanProperty) {
        this.stringProperty = stringProperty;
        this.booleanProperty = booleanProperty;
    }
}
```

Also, a ConfigurationController is added to the package controller which returns the properties. This controller is only added as an example of how to use the properties; it is not relevant to the rest of this blog.

Build the application.

```shell
$ mvn clean verify
```

Run the application.

```shell
$ mvn spring-boot:run
```

Invoke the endpoint as configured in the ConfigurationController.

```shell
$ curl http://localhost:8080/configuration
```

Take a look at the directory target/classes/META-INF. A file spring-configuration-metadata.json is present here, which contains metadata about the configuration classes. This information is used by the Spring Configuration Property Documenter Maven plugin in order to generate the documentation. This metadata file is generated because you added the Spring Configuration Processor as a dependency.

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```

Generate Documentation

The plugin is able to generate the documentation in four different formats: AsciiDoc, Markdown, HTML, and XML. In order to generate the documentation, you only have to add the plugin to the build section (next to adding the Spring Configuration Processor dependency). For each format type, an execution is added. If you only want documentation in Markdown, just remove the other executions.
```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.rodnansol</groupId>
            <artifactId>spring-configuration-property-documenter-maven-plugin</artifactId>
            <version>0.6.1</version>
            <executions>
                <execution>
                    <id>generate-adoc</id>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>generate-property-document</goal>
                    </goals>
                    <configuration>
                        <type>ADOC</type>
                    </configuration>
                </execution>
                <execution>
                    <id>generate-markdown</id>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>generate-property-document</goal>
                    </goals>
                    <configuration>
                        <type>MARKDOWN</type>
                    </configuration>
                </execution>
                <execution>
                    <id>generate-html</id>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>generate-property-document</goal>
                    </goals>
                    <configuration>
                        <type>HTML</type>
                    </configuration>
                </execution>
                <execution>
                    <id>generate-xml</id>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>generate-property-document</goal>
                    </goals>
                    <configuration>
                        <type>XML</type>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

The documentation will be generated when executing a build with Maven, but a quick way is to execute the process-classes goal.

```shell
$ mvn process-classes
```

Or you can invoke a specific execution.

```shell
$ mvn spring-configuration-property-documenter:generate-property-document@generate-markdown
```

Take a look at the directory target/property-docs. For each type, documentation for the configuration properties is added: the AsciiDoc format, the Markdown format, and the HTML format. The XML format is a bit elaborate to display, but it contains an XML representation of the above. In case you have a Maven multi-module project, you can combine all the properties of the different modules into one file. How to do so is described in the documentation.

Customize Output

In the remainder of the post, you will continue with the Markdown format. In the above screenshots, you notice that an Unknown Group is added. This group is also empty.
By default, this group is always added to the output, but it is possible to remove it by means of the markdownCustomization parameter. There are many more customizations available, which are listed in the documentation. In order to disable the Unknown Group in the output, you set the parameter includeUnknownGroup to false.

```xml
<execution>
    <id>generate-markdown</id>
    <phase>process-classes</phase>
    <goals>
        <goal>generate-property-document</goal>
    </goals>
    <configuration>
        <type>MARKDOWN</type>
        <markdownCustomization>
            <includeUnknownGroup>false</includeUnknownGroup>
        </markdownCustomization>
    </configuration>
</execution>
```

Execute the Markdown generation, and you will notice that the Unknown Group is no longer present in the output.

Description and Default Value

In the above output, you notice that the description of the properties and the default value of stringProperty are empty. Create a new configuration class MySecondProperties. Add Javadoc above the fields representing the properties and add a @DefaultValue annotation before stringProperty in the constructor.

```java
@Getter
@ConfigurationProperties("my.second.properties")
public class MySecondProperties {

    /**
     * This is the description for stringProperty
     */
    private final String stringProperty;

    /**
     * This is the description for booleanProperty
     */
    private final boolean booleanProperty;

    public MySecondProperties(@DefaultValue("default value for stringProperty") String stringProperty,
                              boolean booleanProperty) {
        this.stringProperty = stringProperty;
        this.booleanProperty = booleanProperty;
    }
}
```

Generate the documentation and you will notice that the description is present and the default value is filled for stringProperty. This is quite awesome, isn’t it? The documentation sits right there with the code, and the Markdown documentation is generated from it.

Nested Properties

Does it also work for nested properties? Let’s find out.
Create a configuration class MyThirdProperties with a nested property nestedProperty, which also contains a stringProperty and a booleanProperty. The booleanProperty is defaulted to true.

```java
@Getter
@ConfigurationProperties("my.third.properties")
public class MyThirdProperties {

    /**
     * This is the description for stringProperty
     */
    private final String stringProperty;

    /**
     * This is the description for booleanProperty
     */
    private final boolean booleanProperty;

    private final NestedProperty nestedProperty;

    public MyThirdProperties(@DefaultValue("default value for stringProperty") String stringProperty,
                             boolean booleanProperty,
                             @DefaultValue NestedProperty nestedProperty) {
        this.stringProperty = stringProperty;
        this.booleanProperty = booleanProperty;
        this.nestedProperty = nestedProperty;
    }

    @Getter
    public static class NestedProperty {

        /**
         * This is the description for nested stringProperty
         */
        private final String stringProperty;

        /**
         * This is the description for nested booleanProperty
         */
        private final boolean booleanProperty;

        public NestedProperty(@DefaultValue("default value for nested stringProperty") String stringProperty,
                              @DefaultValue("true") boolean booleanProperty) {
            this.stringProperty = stringProperty;
            this.booleanProperty = booleanProperty;
        }
    }
}
```

Generate the Markdown documentation and you will notice that it also contains the nested property.

Records

Since the configuration properties are immutable, it is even better to use Java records instead of Lombok. Create a configuration class MyFourthProperties and use a Java record. The question is where to add the description of the properties, because there are no fields to which you can add Javadoc.
```java
/**
 * @param stringProperty This is the description for stringProperty
 * @param booleanProperty This is the description for booleanProperty
 */
@ConfigurationProperties("my.fourth.properties")
public record MyFourthProperties(
        @DefaultValue("default value for stringProperty") String stringProperty,
        @DefaultValue("true") boolean booleanProperty) {
}
```

Generate the Markdown documentation and notice that the description is empty. This is not an issue with the plugin, however: the description is already empty in the spring-configuration-metadata.json file, and the plugin just uses this information. A question about this has been asked on Stack Overflow; hopefully, an answer will follow.

Conclusion

The Spring Configuration Property Documenter Maven plugin is a great initiative to keep documentation closer to the code and to generate it from the code. In my opinion, it fills a gap that benefits almost all Spring projects.
In production systems, new features sometimes need a data migration to be implemented. Such a migration can be done with different tools. For simple migrations, SQL can be used: it is fast and easily integrated into Liquibase or other tools that manage database migrations. This solution is for use cases that cannot be done in SQL scripts.

The Use Case

The MovieManager project stores the keys to access TheMovieDB in the database. To improve the project, the keys should now be stored encrypted with Tink. The existing keys need to be encrypted during the data migration, and new keys need to be encrypted during the sign-in process. The movie import service needs to decrypt the keys to use them during the import.

The Data Migration

Update the Database Table

To mark migrated rows in the "user1" table, a "migration" column is added in this Liquibase script:

```xml
<changeSet id="41" author="angular2guy">
    <addColumn tableName="user1">
        <column defaultValue="0" type="bigint" name="migration"/>
    </addColumn>
</changeSet>
```

The changeSet adds the "migration" column to the "user1" table and sets the default value "0".

Executing the Data Migration

The data migration is started with the startMigrations() method in the CronJobs class:

```java
...
private static volatile boolean migrationsDone = false;
...
@Scheduled(initialDelay = 2000, fixedRate = 36000000)
@SchedulerLock(name = "Migrations_scheduledTask",
    lockAtLeastFor = "PT2H", lockAtMostFor = "PT3H")
public void startMigrations() {
    LOG.info("Start migrations.");
    if (!migrationsDone) {
        this.dataMigrationService.encryptUserKeys().thenApplyAsync(result -> {
            LOG.info("Users migrated: {}", result);
            return result;
        });
    }
    migrationsDone = true;
}
```

The method startMigrations() is called with the @Scheduled annotation because that enables the use of @SchedulerLock. The @SchedulerLock annotation sets a database lock to limit the execution to one instance, which enables horizontal scalability.
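For @SchedulerLock to take effect, ShedLock itself has to be enabled and given a LockProvider. A minimal sketch assuming the JDBC-based provider (the configuration class name is illustrative; the project's actual wiring may differ):

```java
import javax.sql.DataSource;
import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT1H")
public class SchedulerConfig {

    // ShedLock keeps its locks in a database table ("shedlock" by default),
    // which is how multiple application instances coordinate
    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
    }
}
```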
The startMigrations() method is called 2 seconds after startup and then repeatedly at the configured fixedRate with the @Scheduled annotation. The encryptUserKeys() method returns a CompletableFuture, which enables the use of thenApplyAsync(...) to log the number of migrated users without blocking. The static variable migrationsDone makes sure that each application instance calls the dataMigrationService only once and makes the other calls essentially free.

Migrating the Data

To query the users, the JpaUserRepository has the method findOpenMigrations:

```java
public interface JpaUserRepository extends CrudRepository<User, Long> {
    ...
    @Query("select u from User u where u.migration < :migrationId")
    List<User> findOpenMigrations(@Param(value = "migrationId") Long migrationId);
}
```

The method searches for entities where the migration property has not been increased to the migrationId that marks them as migrated.

The DataMigrationService contains the encryptUserKeys() method to do the migration:

```java
@Service
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class DataMigrationService {
    ...
    @Async
    public CompletableFuture<Long> encryptUserKeys() {
        List<User> migratedUsers = this.userRepository.findOpenMigrations(1L)
            .stream().map(myUser -> {
                myUser.setUuid(Optional.ofNullable(myUser.getUuid())
                    .filter(myStr -> !myStr.isBlank())
                    .orElse(UUID.randomUUID().toString()));
                myUser.setMoviedbkey(this.userDetailService
                    .encrypt(myUser.getMoviedbkey(), myUser.getUuid()));
                myUser.setMigration(myUser.getMigration() + 1);
                return myUser;
            }).collect(Collectors.toList());
        this.userRepository.saveAll(migratedUsers);
        return CompletableFuture.completedFuture(
            Integer.valueOf(migratedUsers.size()).longValue());
    }
}
```

The service has Propagation.REQUIRES_NEW in the annotation to make sure that each method gets wrapped in its own transaction. The encryptUserKeys() method has the @Async annotation to avoid any timeouts on the calling side. The findOpenMigrations(...) method of the repository returns the not-yet-migrated entities, which are then transformed with map. In the map, it is first checked whether the user's UUID is set; if not, it is created and set. Then the encrypt(...) method of the UserDetailService is used to encrypt the user key, and the migration property is increased to show that the entity was migrated. The migrated entities are collected in a list and saved with the repository. Then the result CompletableFuture is created to return the number of migrations done. If the migrations are already done, findOpenMigrations(...) returns an empty collection and nothing is mapped or saved.

The UserDetailServiceBase does the encryption in its encrypt() method:

```java
...
@Value("${tink.json.key}")
private String tinkJsonKey;
private DeterministicAead daead;
...
@PostConstruct
public void init() throws GeneralSecurityException {
    DeterministicAeadConfig.register();
    KeysetHandle handle = TinkJsonProtoKeysetFormat.parseKeyset(
        this.tinkJsonKey, InsecureSecretKeyAccess.get());
    this.daead = handle.getPrimitive(DeterministicAead.class);
}
...
public String encrypt(String movieDbKey, String uuid) {
    byte[] cipherBytes;
    try {
        cipherBytes = daead.encryptDeterministically(
            movieDbKey.getBytes(Charset.defaultCharset()),
            uuid.getBytes(Charset.defaultCharset()));
    } catch (GeneralSecurityException e) {
        throw new RuntimeException(e);
    }
    String cipherText = new String(Base64.getEncoder().encode(cipherBytes),
        Charset.defaultCharset());
    return cipherText;
}
```

The tinkJsonKey is a secret and must be injected into the application as an environment variable or Helm chart value for security reasons. The init() method is annotated with @PostConstruct to run at initialization; it registers the config and creates the KeysetHandle from the tinkJsonKey. Then the primitive is initialized. The encrypt(...) method creates the cipherBytes with encryptDeterministically(...) and the parameters of the method. The UUID is used to have unique cipherBytes for each user.
The result is Base64 encoded and returned as a String.

Conclusion: Data Migration

This migration needs to run as an application and not as a script. The trade-off is that the migration code is now in the application, and after the migration has run, it is dead code. That code should then be removed, but in the real world, the time to do this is limited, and after some time it is forgotten. The alternative is to use something like Spring Batch, but that will take more effort and time because the JPA entities/repositories cannot be reused as easily. A TODO to clean up the method in the DataMigrationService should do the trick sooner or later. One operations constraint has to be considered: during the migration, the database is in an inconsistent state, and user access to the application should be stopped.

Finally Using the Keys

The MovieService contains the decrypt(...) method:

```java
@Value("${tink.json.key}")
private String tinkJsonKey;
private DeterministicAead daead;
...
@PostConstruct
public void init() throws GeneralSecurityException {
    DeterministicAeadConfig.register();
    KeysetHandle handle = TinkJsonProtoKeysetFormat
        .parseKeyset(this.tinkJsonKey, InsecureSecretKeyAccess.get());
    this.daead = handle.getPrimitive(DeterministicAead.class);
}
...
private String decrypt(String cipherText, String uuid)
        throws GeneralSecurityException {
    String result = new String(daead.decryptDeterministically(
        Base64.getDecoder().decode(cipherText),
        uuid.getBytes(Charset.defaultCharset())));
    return result;
}
```

The properties and the init() method are the same as for the encryption. The decrypt(...) method first Base64 decodes the cipherText and then uses the result and the UUID to decrypt the key and return it as a String. That key string is used with the movieDbRestClient methods to import movie data into the database.

Conclusion

The Tink library makes using encryption easy enough. The tinkJsonKey has to be injected at runtime and should not be stored in a repository file or the application jar.
A tinkJsonKey can be created with the EncryptionTest createKeySet(). The ShedLock library enables horizontal scalability, and Spring provides the toolbox that is used. The solution tries to balance the trade-offs for a horizontally scalable data migration that cannot be done in a script.
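Creating such a keyset can be sketched with Tink's API. This is a hedged sketch, not the project's actual EncryptionTest code; it assumes a recent Tink Java version that provides TinkJsonProtoKeysetFormat (the same API the article's init() methods use for parsing):

```java
import com.google.crypto.tink.InsecureSecretKeyAccess;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.TinkJsonProtoKeysetFormat;
import com.google.crypto.tink.daead.DeterministicAeadConfig;
import com.google.crypto.tink.daead.PredefinedDeterministicAeadParameters;

public class CreateTinkKey {

    public static void main(String[] args) throws Exception {
        // Register the deterministic AEAD key types with the Tink runtime
        DeterministicAeadConfig.register();
        // Generate a fresh AES256-SIV keyset and serialize it as cleartext JSON;
        // the output is what gets injected as the tink.json.key secret
        KeysetHandle handle = KeysetHandle.generateNew(
            PredefinedDeterministicAeadParameters.AES256_SIV);
        String tinkJsonKey = TinkJsonProtoKeysetFormat.serializeKeyset(
            handle, InsecureSecretKeyAccess.get());
        System.out.println(tinkJsonKey);
    }
}
```

The generated JSON must be treated as a secret from the moment it is printed; it should go straight into the environment or secret store, never into the repository.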
Unused code adds time and burden to maintaining the codebase, and removing it is the only cure for this side of “more cowbell.” Unfortunately, it’s not always obvious whether developers can remove certain code without breaking the application. As the codebase becomes cluttered and unwieldy, development teams can become mired in mystery code that slows development and lowers morale.

Do you remember the first time you walked into your garage, empty and sparkling, yawning with the promise of protecting your vehicles and power tools? How did it look the last time you walked in? If you’re like many of us, the clutter of long-closed boxes taunts you every time you walk around them, costing you precious minutes before you can get to the objects you need while your car sits in the driveway. Sadly, development teams have a similar problem with their source code, which has grown into a cluttered mess.

Over the last few months, I’ve been working on a way to help development teams maintain less code. Everything we normally read is about working with new frameworks, new tools, and new techniques — but one thing many of us ignore is improving velocity by simply getting rid of things we no longer need. Essentially, as it runs, the JVM streams its first-call method invocation log to a central location to track "have we used this method recently?" When the method appears in the inventory of recent calls, the answer is yes — if the method does not appear, it becomes a candidate for removal.

Dead Code Removal

If you’re a senior developer helping new teammates, consider the work it takes to onboard new members and for them to learn your codebase. Each time they change something, they scroll past methods. Although our IDEs and analyzers can identify fully dead code, the frustration point is code that looks alive but just isn’t used. Often, these are public methods or classes that just aren’t called, or that have commented-out or modified annotations.
As I’ve talked to teams about the idea that we hoard unused code, I’ve heard comments like these:

“I don’t know what this code does, so I don’t want to get rid of it, but I would love to.”

"I could clean that up, but I have other priority issues and don’t have time for that."

“We never prioritize cleanup. We just do new features.”

What if Java developers had an easier way to identify dead code for removal — a way to prioritize code cleanup during our sprints to reduce technical debt without taking time away from business needs to add features? Code removal is complex and generally takes a back seat to new features. Over time, code becomes unused as teams refactor without removing: commenting out an annotation, changing a path, or moving functionality. Most senior engineers would have to allocate time in their sprints to find what to remove, evaluating missing log statements or reviewing code with static analyzers. Both are problematic from a time perspective, so many teams just leave it in the code repository, active but dead: a problem for a future team lead, or delayed until the next big rewrite.

The JVM, however, has an overlooked capability to identify dead code and simplify the prioritization problem. By repurposing the bytecode interpreter, the JVM can identify when a method is first called per execution. When tracked in a central location, these logs produce a treasure map you can follow to remove dead code, reducing the overall cognitive burden and improving team velocity. If a method hasn’t run in a year, you can probably remove it. Team leads can then take classes and methods that haven’t been executed and remove that code, either at one time or over several sprints.

Why remove unused code at all? For many groups, updating libraries and major Java versions requires touching a lot of code. Between Java 8 and Java 17, the XML libraries were deprecated and removed — as you port your application, do you still use all that XML processing?
Instead of touching the code and all associated unit tests, what if you could get rid of that code and remove the tests? If the code doesn’t run, team members shouldn’t spend hours changing it and updating tests to pass: removing the dead code is faster and reduces the mental complexity of figuring that code out. Similar situations arise from updates to major frameworks like Spring, iText, and so on.

Imagine you paid your neighbor’s kids to mow your lawn with your mower, and it was hidden behind a wall of boxes, expired batteries, old clothes, and old electronics. How hard do you think they would try to navigate around your junk before they gave up and went home? Senior engineers are doing the same thing. What should be an hour’s work of mowing becomes two hours.

The problem of cluttered and unused code also affects teams working on decomposing a monolith or re-architecting for the cloud. Without a full measurement of what code is still used, teams end up breaking out huge microservices that are difficult to manage because they include many unnecessary pieces brought out of the monolith. Instead of producing the desired streamlined suite of microservices, these re-architecture projects take longer, cost more, and feel like they need to be rewritten right away because the clutter the team was trying to avoid was never removed. Difficulties stick with the project until teams can decrease the maintenance burden: removing unnecessary code is a rapid way to do so. Instead of clamoring for a rewrite, reduce the maintenance burden by tidying up what you have.

The Benefits of Tracking Used/Unused Code

The distinguishing benefit of tracking live vs. unused code from the JVM is that teams can gather data from production applications without impacting performance. The JVM knows when a method is first called, and logging it doesn’t add any measurable overhead.
This way, teams that aren’t sure about the robustness of their test environments can rely on the result. A similar experience exists for projects that have had different levels of test-driven development over their lifetime. Changing a tiny amount of code could result in several hours of test refactoring to make tests pass and get that green bar. I’ve seen many projects where the unit tests were the only thing that used the code. Removing the code and the unnecessary tests was more satisfying than updating all the code to the newer library just to get a green bar.

The best way of identifying unused code for removal is to passively track what code runs. Instead of figuring it out manually or taking time from sprints, tune your JVM to record the first invocation of each method. It’s like a map of your unused boxes next to your automatic garage door opener. Later on, during sprints or standard work, run a script to compare your code against the list to see what classes and methods never ran. While the team works to build new features and handle normal development, start removing code that never ran. Perform your standard tests – if tests fail, look into removing or changing the test as well, because it was just testing unused code. By removing this unused code over time, teams will have less baggage, less clutter, and less mental complexity to sift through as they work on code. If you’ve been working on a project for a long time or just joined a team and your business is pressuring you to go faster, consider finally letting go of unnecessary code.

Track Code Within the JVM

The JVM provides plenty of capabilities that help development teams create fast-running applications. It already knows when a method will be first called, so unlike profilers, there’s no performance impact on tracking when this occurs. By consolidating this first-call information, teams can identify unused code and finally tidy up that ever-growing codebase.
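The comparison step described above can be sketched in plain Java. This is a hedged sketch, not a real tool's API: it assumes you already have two inventories (one method signature per line), the full inventory extracted from the codebase and the consolidated first-call log, and simply computes the set difference.

```java
import java.util.Set;
import java.util.TreeSet;

public class DeadCodeReport {

    // Returns the methods that appear in the code inventory but never showed
    // up in the consolidated first-call log: the removal candidates.
    public static Set<String> removalCandidates(Set<String> inventory,
                                                Set<String> firstCallLog) {
        Set<String> candidates = new TreeSet<>(inventory);
        candidates.removeAll(firstCallLog);
        return candidates;
    }

    public static void main(String[] args) {
        // Hypothetical method signatures for illustration only
        Set<String> inventory = Set.of(
            "com.example.OrderService#placeOrder",
            "com.example.OrderService#legacyXmlExport",
            "com.example.UserService#login");
        Set<String> firstCallLog = Set.of(
            "com.example.OrderService#placeOrder",
            "com.example.UserService#login");
        // Only legacyXmlExport remains: it never ran in production
        System.out.println(removalCandidates(inventory, firstCallLog));
    }
}
```

In practice, a method missing from one production run is not proof it is dead; the consolidated log should span enough time (quarter-end jobs, yearly batch runs) before the candidates are acted on.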