Core Practices
In this Refcard, we outline specific best practices for setting up efficient, secure Jenkins pipelines.
Use Just Enough Pipeline
Jenkins Pipeline (or simply Pipeline with a capital “P”) is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. This allows you to automate the SDLC and deliver important changes more efficiently to your users and customers.
Pipeline code works beautifully for its intended role of automating build, test, deploy, and administration tasks. But, as it is pressed into more complex roles and unexpected uses, some users have run into snags. Using best practices — and avoiding common mistakes — can help you design a pipeline that is more robust, scalable, and high-performing.
Often, development teams make basic mistakes that can sabotage their pipeline. (Yes, you can sabotage yourself when you’re creating a pipeline.) In fact, it’s easy to spot someone who is going down this dangerous path — and it’s usually because they don't understand some key technical concepts about Pipeline. This invariably leads to scalability mistakes that you’ll pay dearly for down the line.
Just Say No to Pipelines in Programming Languages
Perhaps the biggest misstep people make is deciding that they need to write their entire pipeline in a programming language. After all, Pipeline is a domain-specific language (DSL). However, that does not mean it is a general-purpose programming language.
If you treat the DSL as a general-purpose programming language, you are making a serious architectural blunder by doing the wrong work in the wrong place. Remember that the core of Pipeline code runs on the controller, so everything you express in the Pipeline DSL competes with every other Jenkins job running on that controller.
For example, it’s easy to include a lot of conditionals, flow-control logic, and requests using scripted syntax in a pipeline job. Experience tells us this is not a good idea and can seriously damage pipeline performance. Organizations with poorly written Pipeline jobs can bring a controller to its knees while running only a few concurrent builds.
Wait a minute, you might ask: “Isn’t the controller supposed to handle code?” Yes, the controller certainly is there to execute pipelines. But it's much better to assign individual steps of the pipeline to command-line calls that execute on an agent. So, instead of running a lot of conditionals inside the pipeline DSL, it’s better to put those conditionals inside a shell script or batch file and call that script from the pipeline.
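A minimal sketch of this pattern, assuming a hypothetical repository script at scripts/build.sh that contains the branching logic:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Conditionals and flow control live in the script, which
                // runs on the agent instead of the controller.
                // scripts/build.sh is a hypothetical path in your repository.
                sh './scripts/build.sh'
            }
        }
    }
}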
However, this begs another question: “What if I don't have any agents connected to my controller?” If this is the case, then you've just made another bad mistake in scaling Jenkins pipelines. Why? Because the first rule of building an effective pipeline is to make sure you use agents. If you're using a Jenkins controller and haven’t defined any agents, then your first step should be to define at least one agent and use that agent instead of executing on the controller.
For the sake of maintaining scalability in your pipeline, the general rule is to avoid processing any workload on your controller. If you're running Jenkins jobs on the controller, you are sacrificing controller performance. So, try to avoid using Jenkins controller capacity for things that should be passed off to an agent. Then, as you grow, all of your work should be running on agents. This is why we always recommend setting the number of executors on the controller to zero.
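If you manage your controller with Jenkins Configuration as Code (JCasC, one of the management methods discussed later in this Refcard), this rule can be enforced with a single line of YAML; the snippet below is a minimal sketch:

jenkins:
  # Zero executors means no builds can run directly on the controller
  numExecutors: 0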
Use Just Enough Pipeline to Keep Your Pipeline Scalable
All of this serves to highlight the overarching theme of “using just enough pipeline.” Simply put, use just enough code to connect the pipeline steps and integrate tools, and no more. Limit the amount of complex logic embedded in the Pipeline itself, much as you would in a shell script, and avoid treating the DSL as a general-purpose programming language. This keeps the pipeline easier to maintain, protects against bugs, and reduces the load on controllers.
Another best practice for keeping your pipeline lean, fast, and scalable is to use declarative rather than scripted syntax. Declarative syntax naturally steers you away from the kinds of mistakes described above. It is a simpler expression of code and an easier way to define your job, and it is evaluated when the pipeline starts rather than executing continuously as the pipeline runs.
Therefore, when creating a pipeline, start with declarative and keep it simple for as long as possible. Any time a script block shows up inside a declarative pipeline, extract that block and put it in a shared library step. That way, the declarative pipeline stays clean. Combining declarative syntax with a shared library will cover the vast majority of use cases you’ll encounter.
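As a sketch of this approach (the library name my-shared-library, the step name deployApp, and the script path are hypothetical), the imperative logic lives in a shared library step:

// vars/deployApp.groovy in the shared library repository (hypothetical step name)
def call(String environment) {
    // Scripted logic is isolated here, out of the Jenkinsfile
    sh "./scripts/deploy.sh ${environment}"
}

The declarative pipeline that uses it stays clean:

@Library('my-shared-library') _
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                deployApp('staging')
            }
        }
    }
}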
That being said, you cannot assume that declarative plus a shared library will solve every problem. There are cases where scripted is the right solution. However, declarative is a great starting point until you discover that you absolutely must use scripted.
Just remember, at the end of the day, you’ll do well to follow the adage: “Use just enough pipeline and no more.”
Use Declarative Pipeline
Maintaining multiple configurations of larger, more complex pipelines is another pain point commonly experienced by enterprise Jenkins users. Declarative Pipelines provide a more modern, opinionated approach to building software. The resulting Jenkinsfile can then be committed to a Git repository in a “pipeline as code” fashion. It is the recommended way of building Pipelines at scale, as it makes Pipelines easier to manage.
A declarative Pipeline provides a structured hierarchical syntax to simplify the creation of Pipelines and the associated Jenkinsfiles. In its simplest form, a Pipeline runs on an agent and contains stages, while each stage contains steps that define specific actions. Here is an example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn install'
            }
        }
    }
}
Let’s define a few terms before we go any further:
- Agent: An agent section specifies where the tasks in a Pipeline run. An agent must be defined at the top level inside the pipeline block to set the default agent for all stages in the Pipeline. An agent section may optionally be specified at the stage level, overriding the default for that stage and any child stages (see the stage-level override sketch after the steps example below). In this example, the agent is specified with any, meaning this Pipeline will run on any available agent.
- Stages: A stage represents a logical grouping of tasks in the Pipeline. Each stage may contain a steps section with steps to be executed, or stages to be executed sequentially, in parallel, or expanded into a parallel matrix.
It contains commands that run processes such as Build, Deploy, or Test. These commands are included in the steps section of the stage. It is advisable to give stages descriptive names, as these names are displayed in the UI and in logs.
A stages section must contain at least one stage section and may contain many.
- Steps: The `steps` section contains a set of commands. Each command performs a specific action and is executed one by one:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                sh 'mvn compile'
            }
        }
    }
}
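To illustrate the stage-level agent override mentioned above (the labels linux and high-memory are hypothetical), a stage can request a different agent than the pipeline default:

pipeline {
    agent { label 'linux' }               // default agent for all stages
    stages {
        stage('Build') {
            steps {
                sh 'mvn compile'
            }
        }
        stage('Performance Test') {
            agent { label 'high-memory' } // overrides the default for this stage only
            steps {
                sh 'mvn verify'
            }
        }
    }
}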
Avoid Creating a Monolithic Controller
The number one question Jenkins administrators have when broaching this subject is: “How many jobs can I run on my controller?” Unfortunately, there is no simple answer; there are too many variables. Pipeline complexity alone introduces enough unknowns to defeat any simple estimate. We do, however, recommend using 5,000 jobs as a high-water mark to help you understand when it is time to scale horizontally. Employing a monitoring solution to track macro metrics (CPU, RAM, disk I/O) and establish baselines will allow you to plan properly for scaling. Micro metrics such as garbage collection logs, object creation rate, and thread counts can be analyzed and baselined so that you understand the needs of your jobs and can plan for additional controllers as your installation grows.
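For example, one way to capture garbage collection logs for baselining (a sketch assuming a Java 11+ controller; the log path and rotation settings are illustrative) is to add unified JVM logging options to the controller’s Java arguments:

# Rotated GC logs with timestamps, suitable for baselining controller behavior
JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:file=/var/log/jenkins/gc.log:time,uptime:filecount=5,filesize=20m"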
Don’t Let Your Plugins Manage You
The large ecosystem of plugins that extend the functionality of Jenkins is a critical factor in its popularity and success. Despite all of the benefits that plugins bring to Jenkins, they also bring with them a host of problems. The management of plugins within a given Jenkins controller — and especially across a fleet of Jenkins controllers — has security, stability, and other implications that are compounded at scale.
Consider Your Plugin Management Strategy
Jenkins administrators can go about managing their plugins in many different ways. To determine the best plugin management strategy for your organization, consider the following questions:
- What is your operational model? For example, do you…
- Have centralized management for all plugin-related changes on all controllers?
- Default to one-time plugin deployment for all controllers, with self-management afterwards?
- Allow team/project administrators of each controller to manage and install plugins?
- Will controllers be restricted on the list of plugins available to them?
- Do you prefer to define the plugin list in code or via the UI?
- Is the installation of plugins per controller optional or mandatory?
- What are the network restrictions and/or corporate policy restrictions on plugins?
- Do you have any internally developed plugins?
- What type of controllers will be provisioned?
The comparison chart below demonstrates plugin management methods available natively in Jenkins.
Native Plugin Management Methods for Jenkins

| Option | Description |
|---|---|
| Manage Plugins | The plugin management screen under Manage Jenkins. Administrators can browse the update center and install, update, enable, disable, and remove plugins through the UI, one controller at a time. |
| Commands With Jenkins CLI | The Jenkins command-line interface includes plugin commands (such as install-plugin) that can be scripted for repeatable, automated plugin installation. |
| Jenkins Configuration as Code (JCasC) for Controllers | Controller configuration is defined in version-controlled YAML files, allowing plugin-related configuration to be managed as code across controllers. |
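For teams that prefer defining the plugin list in code, one common pattern (a sketch assuming the official jenkins/jenkins Docker image, which ships with the jenkins-plugin-cli tool) is to bake a pinned plugin set into a controller image:

# Dockerfile: build a controller image with a version-controlled plugin set
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

Here, plugins.txt lists one plugin per line in plugin-id:version form, so plugin changes are reviewed and versioned like any other code.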
For smaller organizations, the above methods may cover their plugin management needs. Larger organizations, particularly those in highly regulated industries such as banking, healthcare, or insurance, require more robust team management capabilities that tie in to their RBAC strategy. They are also more likely to require ongoing performance and security testing of the plugins they use. This might entail testing plugins, plugin versions, and plugin dependencies to determine their stability. For those organizations, more advanced plugin management methods are available in commercial, Jenkins-based CI solutions.
As software organizations grow, so does the number of plugins required to support Jenkins pipelines and application development teams. A Jenkins administrator can end up with hundreds of plugins to manage. Without visibility into which plugins are and are not being used, unused plugins are often left installed to avoid disrupting the system. Over time, this leads to plugin bloat, which can degrade system stability and discourage upgrades.
Use Containers for Build Agents
A continuous integration environment is a mixed bag of machines, platforms, build toolchains, and operating systems. You need the utmost flexibility to manage these machines and build them to be interchangeable. In general, you don’t want to tie builds to a specific build machine.
Make use of container images and Dockerfiles. You won’t have to worry about configuring and managing tool installers or setting up build agent images. With Declarative Pipelines, you can specify that the build runs in a specific container image, or you can have your build environment expressed as a Dockerfile that is stored in source control. You can also use the same behavior at a per-stage level rather than across the entire pipeline.
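A minimal sketch of a container-based build (this requires the Docker Pipeline plugin and an agent with Docker available; the Maven image tag is an assumption):

pipeline {
    agent {
        docker { image 'maven:3-jdk-11' }  // assumed image; use one your team maintains
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B install'
            }
        }
    }
}

To express the build environment as a Dockerfile stored in source control instead, use agent { dockerfile true } at the top level or on an individual stage.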
Developers Can Have Complete Control Over Their Build Environment
If you need a new version of a tool, update the Dockerfile in source control as part of the pull request to upgrade the tool. If you need to build an old branch, the old branch will have the old build environment.
What if I’m Using Windows and Linux Containers?
If you are working in an environment with a mixture of Windows and Linux containers, then you will probably want to use labels to differentiate the build agents configured to run Windows containers from the agents configured to run Linux containers.
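A sketch of this setup, assuming agents have been given the hypothetical labels linux and windows:

pipeline {
    agent none
    stages {
        stage('Linux Build') {
            agent { label 'linux' }   // runs on an agent configured for Linux containers
            steps {
                sh 'mvn -B install'
            }
        }
        stage('Windows Build') {
            agent { label 'windows' } // runs on an agent configured for Windows containers
            steps {
                bat 'mvn -B install'
            }
        }
    }
}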