Local Continuous Delivery Environment with Docker and Jenkins

In this article I’m going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on a local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), a Jenkins master, a Jenkins JNLP slave with Docker, and a private Docker registry. All of those tools will run locally using their Docker images. Thanks to that, you will be able to test everything easily on your laptop and then configure the same environment in production, deployed on multiple servers or VMs. Let’s take a look at the architecture of the proposed solution.

art-docker-1

1. Running Jenkins Master

We use the latest Jenkins LTS image. The Jenkins web dashboard is exposed on port 38080. Slave agents may connect to the master on the default JNLP (Java Web Start) port 50000.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After startup, you have to execute the command docker logs jenkins in order to obtain the initial admin password. Find the following fragment in the logs, copy the generated password and paste it into the Jenkins setup page available at http://192.168.99.100:38080.

art-docker-2
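
Alternatively, instead of searching the logs, you can read the initial password straight from the container – the path below is where the official jenkins/jenkins image stores it.

$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword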

We have to install some Jenkins plugins to be able to check out the project from a Git repository, build the application from source code using Maven, and finally build and push a Docker image to the private registry. Here’s the list of required plugins (a sketch showing how to pre-install them in a custom master image follows the list):

  • Git Plugin – this plugin allows using Git as a build SCM
  • Maven Integration Plugin – this plugin provides advanced integration for Maven 2/3
  • Pipeline Plugin – this is a suite of plugins that allows you to create continuous delivery pipelines as code and run them in Jenkins
  • Docker Pipeline Plugin – this plugin allows you to build and use Docker containers from pipelines
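
If you prefer a reproducible setup over clicking through the plugin manager, the plugins can also be baked into your own master image. Here’s a minimal sketch; it assumes the install-plugins.sh script shipped with the official image and the plugin IDs git, maven-plugin, workflow-aggregator and docker-workflow, so verify them against your Jenkins version.

FROM jenkins/jenkins:lts
RUN /usr/local/bin/install-plugins.sh git maven-plugin workflow-aggregator docker-workflow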

2. Building Jenkins Slave

Pipelines are usually run on a different machine than the one hosting the master node. Moreover, we need to have the Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready Docker images with Docker-in-Docker and a Jenkins client agent, I have never found an image with JDK, Maven, Git and Docker installed together. These are the most commonly used tools when building images for your microservices, so it is definitely worth having such an image prepared alongside the Jenkins image.

Here’s the Dockerfile for the Jenkins Docker-in-Docker slave with Git, Maven and OpenJDK installed. I used Docker-in-Docker as the base image (1). We can override some properties when running our container. You will probably have to override the default Jenkins master address (2) and the slave secret key (3). The rest of the parameters are optional, and you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with special sudo rights for running Docker (5). Finally, we run the entrypoint.sh script, which starts the Docker daemon and the Jenkins agent (6).

# (1)
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2)
ENV JENKINS_URL http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3)
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo

# (4)
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA}  /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar

COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
# (6)
ENTRYPOINT ["/usr/local/bin/entrypoint"]

Here’s the script entrypoint.sh.

#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
	-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
	-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository https://github.com/piomin/jenkins-slave-dind-jnlp.git, build the image and then start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it is an optional step, because the image is already available on my Docker Hub account.

art-docker-3

3. Enabling Docker-in-Docker Slave

To add a new slave node you need to navigate to Manage Jenkins -> Manage Nodes -> New Node. Then define a permanent node with the name parameter filled in. The most suitable name is the default name declared inside the Docker image definition – dind-node. You also have to set the remote root directory, which should be equal to the path defined inside the container for the JENKINS_HOME environment variable. In my case it is /home/jenkins. The slave node should be launched via Java Web Start (JNLP).

art-docker-4
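
If you prefer to script this step instead of clicking through the UI, a node with the same settings can also be created from the Jenkins script console (Manage Jenkins -> Script Console). This is only a sketch based on the standard Jenkins node API – the exact constructors may differ slightly between Jenkins versions.

import hudson.slaves.DumbSlave
import hudson.slaves.JNLPLauncher
import hudson.slaves.RetentionStrategy
import jenkins.model.Jenkins

// permanent agent launched via JNLP, matching the defaults baked into the slave image
def node = new DumbSlave('dind-node', '/home/jenkins', new JNLPLauncher())
node.setNumExecutors(1)
node.setLabelString('dind-node')
node.setRetentionStrategy(new RetentionStrategy.Always())
Jenkins.instance.addNode(node)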

The new node is visible on the list of nodes as disabled. You should click it in order to obtain its secret key.

art-docker-5

Finally, you may run your slave container using the following command, containing the secret copied from the node’s panel in the Jenkins web dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan, you should see the enabled dind-node in the list of nodes.

art-docker-6

4. Setting up Docker Private Registry

After deploying the Jenkins master and slave, there is one last element of the architecture that has to be launched – a private Docker registry. Because we will access it remotely (from the Docker-in-Docker container), we have to configure a secure TLS/SSL connection. To achieve that we should first generate a TLS certificate and key. We can use the openssl tool for it. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 1024

Then, we should generate a certificate signing request (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate that is valid for one year using the openssl command shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don’t forget to remove the passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456

You should copy the generated .key and .crt files to your Docker machine. After that you may run the Docker registry using the following command.

docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2

If the registry has been started successfully, you should be able to access it over HTTPS by calling the address https://192.168.99.100:5000/v2/_catalog from your web browser.
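
One more thing worth remembering: every Docker daemon that pushes to or pulls from this registry (including the one inside the Docker-in-Docker slave) has to trust the self-signed certificate. The standard way to do that is to place the certificate under /etc/docker/certs.d, keyed by the registry address, on each Docker host.

$ mkdir -p /etc/docker/certs.d/192.168.99.100:5000
$ cp registry.crt /etc/docker/certs.d/192.168.99.100:5000/ca.crt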

5. Creating application Dockerfile

The sample application’s source code is available on GitHub in the repository sample-spring-microservices-new (https://github.com/piomin/sample-spring-microservices-new.git). There are several modules with microservices. Each of them has a Dockerfile created in its root directory. Here’s a typical Dockerfile for a microservice built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building pipeline through Jenkinsfile

This step is the most important phase of our exercise. We will prepare a pipeline definition which ties together all the tools and solutions discussed so far. This pipeline definition is a part of every sample application’s source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.
Every pipeline is divided into stages. Every stage defines a subset of tasks performed within the pipeline. We can select the node responsible for executing the pipeline’s steps, or leave it empty to allow random selection of the node. Because we have already prepared a dedicated node with Docker, we force the pipeline to be built on that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using a Maven command (2). Once the fat JAR file has been prepared, we may proceed to building the application’s Docker image (3). We use methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR file to the secure private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our Docker registry. Here’s the full code of the Jenkinsfile prepared for the module config-service.

node('dind-node') {
    stage('Checkout') { // (1)
      git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Build') { // (2)
      dir('config-service') {
        sh 'mvn clean install'
        def pom = readMavenPom file:'pom.xml'
        print pom.version
        env.version = pom.version
        currentBuild.description = "Release: ${env.version}"
      }
    }
    stage('Image') {
      dir ('config-service') {
        docker.withRegistry('https://192.168.99.100:5000') {
          def app = docker.build "piomin/config-service:${env.version}" // (3)
          app.push() // (4)
        }
      }
    }
}

7. Creating Pipeline in Jenkins Web Dashboard

After preparing the application’s source code, Dockerfile and Jenkinsfile, the only thing left is to create the pipeline using the Jenkins UI. We need to select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then go to the Configure panel and select Pipeline script from SCM in the Pipeline section. In the following form we should fill in the address of the Git repository, the user credentials and the location of the Jenkinsfile.

art-docker-7

8. Configure GitLab WebHook (Optional)

If you run GitLab locally using its Docker image, you will be able to configure a webhook which triggers a run of your pipeline after pushing changes to the Git repository. To run GitLab using Docker execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab dashboard we need to enable this feature for the Jenkins pipeline. To achieve it we should first install the GitLab Plugin.

art-docker-8

Then, you should come back to the pipeline’s configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline, called config-service-pipeline, under the URL http://192.168.99.100:38080/project/config-service-pipeline, as shown in the following picture.

art-docker-9

Before proceeding to the webhook configuration in the GitLab dashboard, you should retrieve your Jenkins user API token. To achieve it, go to the profile panel, select Configure and click the Show API Token button.

art-docker-10

To add a new webhook for your Git repository, go to Settings -> Integrations and fill the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.

art-docker-11

9. Running pipeline

Now, we may finally run our pipeline. If you use the GitLab Docker container as your Git repository platform, you just have to push changes in the source code. Otherwise you have to start the pipeline build manually. The first build will take a few minutes, because Maven has to download the dependencies required for building the application. If everything ends with success, you should see the following result on your pipeline dashboard.

art-docker-13

You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.

art-docker-12
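
The same information can be retrieved from the command line with the standard Docker Registry v2 API; the -k flag skips verification of our self-signed certificate.

# list repositories stored in the registry
$ curl -k https://192.168.99.100:5000/v2/_catalog
# list available tags of a single image
$ curl -k https://192.168.99.100:5000/v2/piomin/config-service/tags/list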


Quick guide to deploying Java apps on OpenShift

In this article I’m going to show you how to deploy your applications on OpenShift (Minishift), connect them with other services exposed there, and use some other interesting deployment features provided by OpenShift. OpenShift is built on top of Docker containers and the Kubernetes container cluster orchestrator. Currently, it is the most popular enterprise platform based on those two technologies, so it is definitely worth examining in more detail.

1. Running Minishift

We use Minishift to run a single-node OpenShift cluster on the local machine. The only prerequisite for installing Minishift is a virtualization tool. I use Oracle VirtualBox as a hypervisor, so I set the --vm-driver parameter to virtualbox in my start command.

$  minishift start --vm-driver=virtualbox --memory=3G

2. Running Docker

It turns out that you can easily reuse the Docker daemon managed by Minishift in order to run Docker commands directly from your command line, without any additional installation. To achieve this just run the following command after starting Minishift.

@FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i

3. Running OpenShift CLI

The last tool required before starting any practical exercise with Minishift is the CLI, available under the oc command. To enable it on your command line run the following commands.

$ minishift oc-env
$ SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.9.0\windows;%PATH%
$ REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

Alternatively, you can use the OpenShift web console, which is available on port 8443. On my Windows machine it is by default available at address 192.168.99.100.
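
If you stick with the CLI, you also have to log in to the cluster before running any other oc commands. A typical Minishift login looks like the one below; by default Minishift accepts any password for the developer user.

$ oc login https://192.168.99.100:8443 -u developer -p developer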

4. Building Docker images of the sample applications

I prepared two sample applications that are used for presenting the OpenShift deployment process. These are simple Java/Vert.x applications that provide an HTTP API and store data in MongoDB. However, the technology is not very important here. We need to build Docker images of these applications. The source code is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git) in the branch openshift (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift). Here’s a sample Dockerfile for account-vertx-service.

FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE account-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
ENV DATABASE_USER mongo
ENV DATABASE_PASSWORD mongo
ENV DATABASE_NAME db
EXPOSE 8095
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]

Go to the account-vertx-service directory and run the following command to build an image from the Dockerfile visible above.

$ docker build -t piomin/account-vertx-service .

The same step should be performed for customer-vertx-service. After that you have two images built, both with the same latest version, which can now be deployed and run on Minishift.
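
For completeness, the analogous command for the second service (assuming the image name follows the same convention) is shown below.

$ cd ../customer-vertx-service
$ docker build -t piomin/customer-vertx-service .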

5. Preparing OpenShift deployment descriptor

When working with OpenShift, the first step of an application’s deployment is to create a YAML configuration file. This file contains basic information about the deployment, like the containers used for running the application (1), scaling (2), triggers that drive automated deployments in response to events (3), and the strategy of deploying your pods on the platform (4).
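
The descriptor itself is not listed here, so below is a minimal sketch of what account-deployment.yaml could look like for account-vertx-service. Treat it as an illustration only – the labels, names and exact apiVersion are assumptions that may need adjusting to your OpenShift version.

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: account-service
spec:
  replicas: 1                         # (2) scaling
  strategy:
    type: Rolling                     # (4) deployment strategy
  triggers:                           # (3) automated deployment triggers
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - account-vertx-service
        from:
          kind: ImageStreamTag
          name: account-vertx-service:latest
  selector:
    app: account-service
  template:
    metadata:
      labels:
        app: account-service
    spec:
      containers:                     # (1) containers used for running the application
        - name: account-vertx-service
          image: account-vertx-service:latest
          ports:
            - containerPort: 8095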

Deployment configurations can be managed with the oc command like any other resource. You can create a new configuration or update an existing one using the oc apply command.

$ oc apply -f account-deployment.yaml

You may be a little surprised, but this command does not trigger any build and does not start the pods. In fact, you have only created a resource of type deploymentConfig, which describes the deployment process. You can start this process using some other oc commands, but first let’s take a closer look at the resources required by our application.

6. Injecting environment variables

As I have mentioned before, our sample applications use an external datasource. They need to open a connection to the existing MongoDB instance in order to store the data passed through the HTTP endpoints exposed by the application. Here’s the MongoVerticle class, which is responsible for establishing a client connection with MongoDB. It uses environment variables for setting the security credentials and database name.

public class MongoVerticle extends AbstractVerticle {

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final AccountRepository service = new AccountRepositoryImpl(client);
			ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
		});
	}

}

MongoDB is available in OpenShift’s catalog of predefined Docker images. You can easily deploy it on your Minishift instance just by clicking the “MongoDB” icon in the “Catalog” tab. The username and password will be automatically generated if you do not provide them during deployment setup. All these properties are available as the deployment’s environment variables and are stored as secrets/mongodb, where mongodb is the name of the deployment.

openshift-1

Environment variables can be easily injected into any other deployment using the oc set command; they are then injected into the pod during the deployment process. The following command injects all secrets assigned to the mongodb deployment into the configuration of our sample application’s deployment.

$ oc set env --from=secrets/mongodb dc/account-service

7. Importing Docker images to OpenShift

A deployment configuration is ready, so in theory we could start the deployment process. However, let’s go back for a moment to the deployment config defined in step 5. We defined there two triggers that cause a new replication controller to be created, which results in deploying a new version of the pod. The first of them is a configuration change trigger that fires whenever changes are detected in the pod template of the deployment configuration (ConfigChange). The second, the image change trigger (ImageChange), fires when a new version of the Docker image is pushed to the repository. To be able to watch whether an image in the repository has changed, we have to define and create an image stream. Such an image stream does not contain any image data, but presents a single virtual view of related images, something similar to an image repository. Inside the deployment config file we referred to the image stream account-vertx-service, so the same name should be provided inside the image stream definition. In turn, when setting the spec.dockerImageRepository field, we define the Docker pull specification for the image.
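
Here’s a sketch of what such an image stream definition (account-image.yaml) could look like; again, the apiVersion is an assumption that depends on your OpenShift version.

kind: ImageStream
apiVersion: image.openshift.io/v1
metadata:
  name: account-vertx-service
spec:
  dockerImageRepository: piomin/account-vertx-service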

Finally, we can create the resource on the OpenShift platform.

$ oc apply -f account-image.yaml

8. Running deployment

Once the deployment configuration has been prepared and the Docker images have been successfully imported into the repository managed by the OpenShift instance, we may trigger the deployment using the following oc commands.

$ oc rollout latest dc/account-service
$ oc rollout latest dc/customer-service

If everything goes fine, new pods should be started for the defined deployments. You can easily check it out using the OpenShift web console.
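
The same check can be done from the command line with standard oc calls.

$ oc get pods
$ oc rollout status dc/account-service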

9. Updating image stream

We have already created two image streams related to the Docker repositories. Here’s a screen from the OpenShift web console showing the list of available image streams.

openshift-images

To be able to push a new version of an image to the OpenShift internal Docker registry, we should first perform a docker login against this registry using the user’s authentication token. To obtain the token from OpenShift use the oc whoami -t command, and then pass it to your docker login command with the -p parameter.

$ oc whoami -t
Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
$ docker login -u developer -p Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM https://172.30.1.1:5000

Now, if you make any change in your application and rebuild your Docker image with the latest tag, you have to push that image to the image stream on OpenShift. The address of the internal registry has been automatically generated by OpenShift, and you can check it out in the image stream’s details. For me, it is 172.30.1.1:5000.

$ docker tag piomin/account-vertx-service 172.30.1.1:5000/sample-deployment/account-vertx-service:latest
$ docker push 172.30.1.1:5000/sample-deployment/account-vertx-service

After pushing a new version of the Docker image to the image stream, a rollout of the application starts automatically. Here’s a screen from the OpenShift web console that shows the deployment history of the account-service application.

openshift-2

Conclusion

I have shown you the consecutive steps of deploying your application on the OpenShift platform. Based on a sample Java application that connects to a database, I illustrated how to inject credentials into that application’s pod entirely transparently for the developer. I also performed an update of the application’s Docker image, in order to show how to trigger a new deployment on an image change.

openshift-3

Code Quality with SonarQube

Source code quality analysis is an essential part of the Continuous Integration process. Together with automated tests, it is the key element in delivering reliable software without many bugs, security vulnerabilities or performance leaks. Probably the best static code analyzer you can find on the market is SonarQube. It supports more than 20 programming languages. It can be easily integrated with the most popular Continuous Integration engines like Jenkins or TeamCity. Finally, it has many features and plugins which can be easily managed from an extensive web dashboard.

However, before we proceed to discuss the most powerful capabilities of this solution, it is well worth asking: why do we do it? Would it be productive for us to force developers to focus on code quality? Most of us are programmers, and we know very well that everyone else expects us to deliver code which meets business demands rather than looks nice 🙂 After all, do we really want to break the build by failing a not-so-important rule like maximum line length – rather a small pleasure. On the other hand, taking over source code from someone else who was not paying attention to any good programming practices is also not welcome, if you know what I mean. But stay calm, SonarQube is the right solution for you. In this article I’ll show you that caring about high code quality can be good fun, and above all that you can learn how to develop better code, while other team members spend time on fixing their bugs 🙂

Enough talk, let’s get to work. I suggest running your test instance of SonarQube using Docker. Here’s the SonarQube run command. Then you can log in to the web dashboard available at http://192.168.99.100:9000 with admin/admin credentials.

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube

You are signed in to the web dashboard, but there are no projects created yet. To perform source code scanning you should just run the single command mvn sonar:sonar, assuming you are using Maven in the build process. Don’t forget to add the SonarQube server address to the settings.xml file, as shown in the fragment below.

<profile>
	<id>sonar</id>
	<activation>
		<activeByDefault>true</activeByDefault>
	</activation>
	<properties>
		<sonar.host.url>http://192.168.99.100:9000</sonar.host.url>
	</properties>
</profile>

When the SonarQube analysis finishes, you will see a new project, with the same name as the Maven artifact, containing your code metrics and statistics. I created a sample Spring Boot application where I tried to make some of the most popular mistakes that impact code quality. The source code is available on GitHub. The right module for analysis is named person-service. The code with many bugs and vulnerabilities is pushed to the v0.1 branch, while the master branch has the latest version with the corrections made based on the SonarQube analysis, which I’m going to describe in the next section of this article. OK, let’s start the analysis with the mvn command. We may be a little surprised – the code analysis result for version 0.1 is rather unsatisfying. Although I spent much time on making significant mistakes, SonarQube reported only some bugs and code smells, and the quality gate status is ‘Passed’.

sonar-1

Let’s take a closer look at quality gates in SonarQube. As I mentioned before, we would not like to break the build by failing one or a group of not very important rules. We can achieve that by creating a quality gate. This is a set of requirements that tells us whether or not we should go to deployment with the new version of the project. There is a default quality gate for Java, but we can change its thresholds or create a new one. The default quality gate has thresholds set only for new code, so I decided to create one for my sample application, with minimum test coverage set to 50 percent, unit test success detection and ratings based on the full code. Now, the scanning result looks a little different 🙂

sonar-3sonar-2

To enable scanning test coverage in SonarQube we should add the JaCoCo plugin to the Maven pom.xml. During the Maven build mvn clean test -Dmaven.test.failure.ignore=true sonar:sonar the report will be automatically generated and uploaded to SonarQube.

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.9</version>
	<executions>
		<execution>
			<id>default-prepare-agent</id>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
		<execution>
			<id>default-report</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>report</goal>
			</goals>
		</execution>
	</executions>
</plugin>

The last change that has to be done before rescanning the application is installing some plugins and enabling rules disabled by default. The list of all active and inactive rules can be displayed in the Quality Profiles section. In the default profile for Java there are more than 400 rules available and 271 active at the start. I suggest you install the FindBugs and Checkstyle plugins. Those plugins have many additional rules for Java which can be activated for our profile. Now there are about 1.1k inactive rules in many categories. Which of them should be activated depends on you: you can activate them in the default profile, create your own profile or use one of the predefined profiles which were automatically created by the plugins we installed before. In my opinion the best way to select the right rules is to create a simple project and check which rules are suitable for you. Then you can check out the detailed description and disable a rule if needed. After activating some rules provided by the Checkstyle plugin I have a report with 5 bugs and 77 code smells. The most important errors are visible in the pictures below.

sonar-5sonar-6sonar-7

All issues reported by SonarQube can be easily reviewed using the UI dashboard for each project in the Issues tab. We can also install the SonarLint plugin, which integrates with the most popular IDEs like Eclipse or IntelliJ, and all those issues will be displayed there as well. Now, we can proceed to fixing the errors. All changes which I made to resolve the issues can be viewed in the GitHub repository, from branch v0.1 to v0.6. I resolved all problems except some checked exception warnings, which I set to Resolved (Won’t fix). Those issues won’t be reported after the next scans.

sonar-7

Finally my project looks as you can see in the picture below. All ratings have score ‘A’, test coverage is greater than 60% and the quality gate is ‘Passed’. The final person-service version is committed to the master branch.

sonar-8

As you can see, there are many rules which can be applied to your project during SonarQube scanning, but sometimes it may not be enough for your organization’s needs. In that case you may search for additional plugins or create your own plugin with the rules that meet your specific requirements. In my sample available on GitHub there is a module sonar-rules, where I defined a rule checking whether all public classes have Javadoc comments including the @author field. To create a SonarQube plugin add the following fragment to your pom.xml and change the packaging type to sonar-plugin.

<plugin>
	<groupId>org.sonarsource.sonar-packaging-maven-plugin</groupId>
	<artifactId>sonar-packaging-maven-plugin</artifactId>
	<version>1.17</version>
	<extensions>true</extensions>
	<configuration>
		<pluginKey>piotjavacustom</pluginKey>
		<pluginName>PiotrCustomRules</pluginName>
		<pluginDescription>For test purposes</pluginDescription>
		<pluginClass>pl.piomin.sonar.plugin.CustomRulesPlugin</pluginClass>
		<sonarLintSupported>true</sonarLintSupported>
		<sonarQubeMinVersion>6.0</sonarQubeMinVersion>
	</configuration>
</plugin>

Here’s the class with the custom rule definition. First we have to get the scanned class node (Kind.CLASS), and then process the first comment (Kind.TRIVIA) in the class file. The rule parameters like name or priority are set inside the @Rule annotation.

@Rule(key = "CustomAuthorCommentCheck",
		name = "Javadoc comment should have @author name",
		description = "Javadoc comment should have @author name",
		priority = Priority.MAJOR,
		tags = {"style"})
public class CustomAuthorCommentCheck extends IssuableSubscriptionVisitor {

	private static final String MSG_NO_COMMENT = "There is no comment under class";
	private static final String MSG_NO_AUTHOR = "There is no author inside comment";

	private Tree actualTree = null;

	@Override
	public List<Kind> nodesToVisit() {
		return ImmutableList.of(Kind.TRIVIA, Kind.CLASS);
	}

	@Override
	public void visitTrivia(SyntaxTrivia syntaxTrivia) {
		String comment = syntaxTrivia.comment();
		if (syntaxTrivia.column() != 0)
			return;
		if (comment == null) {
			reportIssue(actualTree, MSG_NO_COMMENT);
			return;
		}
		if (!comment.contains("@author")) {
			reportIssue(actualTree, MSG_NO_AUTHOR);
			return;
		}
	}

	@Override
	public void visitNode(Tree tree) {
		if (tree.is(Kind.CLASS)) {
			actualTree = tree;
		}
	}

}

Before building and deploying the plugin into the SonarQube server, it can be easily tested using JUnit. Inside the src/test/files directory we should place the test data – Java files which are scanned during the JUnit test. For the failure test we should also create a file CustomAuthorCommentCheck_java.json in the /org/sonar/l10n/java/rules/squid/ directory with the rule definition.

@Test
public void testOk() {
	JavaCheckVerifier.verifyNoIssue("src/test/files/CustomAuthorCommentCheck.java", new CustomAuthorCommentCheck());
}

@Test
public void testFail() {
	JavaCheckVerifier.verify("src/test/files/CustomAuthorCommentCheckFail.java", new CustomAuthorCommentCheck());
}

Finally, build the Maven project and copy the generated JAR artifact from the target directory into the SonarQube Docker container, into the $SONAR_HOME/extensions/plugins directory. Then restart your Docker container.

docker cp target/sonar-plugins-1.0-SNAPSHOT sonarqube:/opt/sonarqube/extensions/plugins
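
The restart itself is a single Docker command.

docker restart sonarqube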

After the SonarQube restart your plugin’s rules are visible under the Rules section.

sonar-4

The last thing to do is to run SonarQube scanning in the Continuous Integration process. SonarQube can be easily integrated with the most popular CI server – Jenkins. Here’s a fragment of a Jenkins pipeline where we perform source code scanning and then wait for the quality gate result. If you are interested in more details about Jenkins pipelines, Continuous Integration and Delivery, read my previous post How to setup Continuous Delivery environment.

stage('SonarQube analysis') {
	withSonarQubeEnv('My SonarQube Server') {
		sh 'mvn clean package sonar:sonar'
	}
}
stage("Quality Gate") {
	timeout(time: 1, unit: 'HOURS') {
		def qg = waitForQualityGate()
		if (qg.status != 'OK') {
			error "Pipeline aborted due to quality gate failure: ${qg.status}"
		}
	}
}

Microservices Continuous Delivery with Docker and Jenkins

Docker, microservices and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automation of the testing, building and deployment process seems particularly important. Docker is an excellent solution for microservices, because it can create and run isolated containers with a service. Today, I’m going to show you how to create a basic continuous delivery pipeline for sample microservices using the most popular software automation tool – Jenkins.

Sample Microservices

Before I get into the main topic of the article, let me say a few words about the structure and tools used for creating the sample microservices. The sample application consists of two microservices communicating with each other (account, customer), a discovery server (Eureka) and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks. Its source code is available on GitHub. Spring Cloud has support for microservice discovery and an API gateway out of the box – we only have to define the right dependencies inside the Maven project configuration file (pom.xml). The picture illustrating the adopted solution architecture is visible below. The customer and account REST API services, the discovery server and the gateway run inside separate Docker containers. The gateway is the entry point to the microservices system. It interacts with all other services. It proxies requests to the selected microservices, searching for their addresses in the discovery service. If more than one instance of the account or customer microservice exists, requests are load balanced with Ribbon and Feign client. The account and customer services register themselves in the discovery server after startup. There is also a possibility of interaction between them, for example if we would like to find and return all of a customer’s account details.

Image title

I wouldn’t like to go into the details of implementing those microservices with the Spring Boot and Spring Cloud frameworks. If you are interested in a detailed description of the sample application development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix and Eureka. In that blog post I described how to implement service discovery, distributed tracing, load balancing, logging trace ID propagation and an API gateway for microservices with those solutions.

Dockerfiles

Each service in the sample source code has a Dockerfile with its Docker image build definition. It’s really simple. Here’s the Dockerfile for the account service. We use openjdk as the base image. The JAR file from the target directory is added to the image and then run using the java -jar command. The service runs on port 2222, which is exposed outside the container.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

We also had to set the main class in the JAR manifest. We achieve it using the spring-boot-maven-plugin in the module’s pom.xml. The fragment is visible below. We also set the build finalName to cut off the version number from the target JAR file. The Dockerfile and Maven build definition are pretty similar for all the other microservices.

<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Jenkins pipelines

We use the Pipeline Plugin for building continuous delivery pipelines for our microservices. In addition to the standard set of plugins on Jenkins, we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.

Image title

Here’s the pipeline definition written in Groovy for the discovery service. We have five stages of execution. Inside the Checkout stage we pull changes from the project’s remote Git repository. Then the project is built with the mvn clean install command and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service Dockerfile and then push that image to the local registry. In the fourth stage we run the built image with the default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means that the source pipeline finishes and won’t wait for the account pipeline execution to finish.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}").run('-p 8761:8761 -h discovery --name discovery')
        }

        stage ('Final') {
            build job: 'account-service-pipeline', wait: false
        }      

    }

}

The account pipeline is very similar. The main difference is in the fourth stage, where the account service container is linked to the discovery container. We need to link those containers, because account-service registers itself in the discovery server and must be able to connect to it using a hostname.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/account-service:${env.version}").run('-p 2222:2222 -h account --name account --link discovery')
        }

        stage ('Final') {
            build job: 'customer-service-pipeline', wait: false
        }      

    }

}

Similar pipelines are also defined for the customer and gateway services. They are available in the root directory of each microservice as a Jenkinsfile. Every image built during pipeline execution is also pushed to the local Docker registry. To enable the local registry on our host, we need to pull and run the Docker registry image and use the registry address as an image name prefix while pulling or pushing. The local registry is exposed on its default port 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.

docker run -d --name registry -p 5000:5000 registry

Testing

You should launch the build on discovery-service-pipeline. This pipeline will not only run the build for the discovery service, but also start the next pipeline build (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all pipelines finish, you can check the list of running Docker containers by calling the docker ps command. You should see five containers: the local registry and our four microservices. You can also check the logs of each container by running the docker logs command, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts or, via the Zuul gateway, http://localhost:8765/account/account.

CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
fa3b9e408bb4        localhost:5000/gateway-service:1.0-SNAPSHOT     "java -jar /gatewa..."   About an hour ago   Up About an hour    0.0.0.0:8765->8765/tcp   gateway
cc9e2b44fe44        localhost:5000/customer-service:1.0-SNAPSHOT    "java -jar /custom..."   About an hour ago   Up About an hour    0.0.0.0:3333->3333/tcp   customer
49657f4531de        localhost:5000/account-service:1.0-SNAPSHOT     "java -jar /accoun..."   About an hour ago   Up About an hour    0.0.0.0:2222->2222/tcp   account
fe07b8dfe96c        localhost:5000/discovery-service:1.0-SNAPSHOT   "java -jar /discov..."   About an hour ago   Up About an hour    0.0.0.0:8761->8761/tcp   discovery
f9a7691ddbba        registry

Conclusion

I have presented a basic sample of a Continuous Delivery environment for microservices using Docker and Jenkins. You can easily find the limitations of the presented solution: for example, we had to link the Docker containers with each other to enable communication between them, and all of the tools and microservices run on the same machine. For a more advanced sample we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers for simulating multiple Docker machines. I hope this article is a fine introduction to microservices Continuous Delivery and helps you understand the basics of this idea. I think you can expect more advanced articles about this subject from me in the near future.

Jenkins nodes on Docker containers

Jenkins is the most popular open source automation server written in Java. It has many interesting plugins and features. Today, I’m going to show you one of them – how to set up a Jenkins master server with one slave instance connected to the master, so that we will be able to run distributed builds using a few Docker containers. For this sample we use the Docker images of Jenkins (jenkins) and the Jenkins slave (jenkinsci/jnlp-slave). Let’s start by running the Jenkins Docker container.

docker run -d --name jenkins -p 50000:50000 -p 50080:8080 jenkins

Go to the management console (http://192.168.99.100:50080) and select Manage Jenkins -> Manage Nodes, then click New Node. On the next page you have to put the slave name – for this sample it is slave-1. After clicking OK you will see the new node on the list. Now, you can configure it by clicking the settings button and display the node details by clicking the node name on the list.

jenkins-slave

The new node is created but still disabled. After clicking the node you will see the page with its details. The important information is the secret visible in the command line. Copy that token.

jenkins-slave1

Now, we are going to run the Docker image with the JNLP agent. In the docker run command we pass the Jenkins master URL, the secret token and the chosen node name (slave-1). If you would like to set it up without a Docker container, you should download the slave agent JAR file by clicking the Launch button and run the agent from the command line like in the picture above.

docker run -d --name jenkins-slave1 jenkinsci/jnlp-slave -url http://192.168.99.100:50080 5d681c12e9c68f14373d62375e852d0874ea9daeca3483df4c858ad3556d406d slave-1

After running the slave container you should see the name slave-1 in the Build Executor Status below the master node.

jenkins-slave2

Now, we can configure a sample Jenkins pipeline to test our new slave. Pipeline builds can be run on the master node or on a slave node. Here’s a sample pipeline fragment. To try that sample you need to have the Pipeline Plugin installed on your Jenkins server.

node() {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

You can select the node for running your pipeline by providing the node name. Now, the build always runs on the slave-1 node.

node('slave-1') {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

How to setup Continuous Delivery environment

I have already read some interesting articles and books about Continuous Delivery, because I had to set it up inside my organization. The last document about this subject I can recommend is the DZone Guide to DevOps. If you are interested in this area of software development, it can be really enlightening reading for you. The main purpose of my article is to show the practical side of Continuous Delivery – the tools which can be used to build such an environment. I’m going to show how to build a professional Continuous Delivery environment using:

  • Jenkins – most popular open source automation server
  • GitLab – web-based Git repository manager
  • Artifactory – open source Maven repository manager
  • Ansible – simple open source automation engine
  • SonarQube – open source platform for continuous code quality

Here’s a picture showing our continuous delivery environment.

continuous_delivery

The changes pushed to the Git repository managed by the GitLab server are automatically propagated to Jenkins using a webhook. We enable push and merge request triggers. SSL verification will be disabled. In the URL field we have to put the Jenkins pipeline address with authentication credentials (user and password) and the secret token. This is the API token visible in the Jenkins user profile under the Configure tab.
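
Putting those pieces together, the webhook URL takes roughly the following form (user, API token, host and pipeline name are placeholders to be replaced with your own values):

http://<user>:<api-token>@<jenkins-host>:<port>/project/<pipeline-name>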

webhook

Here’s the Jenkins pipeline configuration in the 'Build triggers' section. We have to enable the option 'Build when a change is pushed to GitLab'. The GitLab CI Service URL is the address we have already set in the GitLab webhook configuration. Push and merge request triggers are enabled for all branches. An additional restriction for branch filtering can also be added: by name or by regex. To support this kind of trigger in Jenkins you need to have the GitLab plugin installed.

jenkins

There are two kinds of events which can trigger a Jenkins build:

  • push – a change in the source code is pushed to the Git repository
  • merge request – a change in the source code is pushed to one branch and then the committer creates a merge request to the build branch from the GitLab management console

If you would like to use the first option, you have to disable build branch protection to enable direct pushes to that branch. If you use merge requests, branch protection needs to be activated.

protection

A merge request from the GitLab console is very intuitive. Under the 'Merge request' section we select the source and target branch and confirm the action.

merge

OK, many words about GitLab and Jenkins integration… Now you know how to configure it. You only have to decide whether you prefer push or merge request in your continuous delivery configuration. Merge requests are used for code review in GitLab, so they are a useful additional step in your continuous pipeline. Let’s move on. We have to install some other plugins in Jenkins to integrate it with Artifactory, SonarQube and Ansible – most notably the Artifactory, SonarQube Scanner, Ansible and GitLab plugins.

Here’s the configuration of my Jenkins pipeline for a sample Maven project.

node {

    withEnv(["PATH+MAVEN=${tool 'Maven3'}/bin"]) {

        stage('Checkout') {
            def branch = env.gitlabBranch
            env.branch = branch
            git url: 'http://172.16.42.157/minkowp/start.git', credentialsId: '5693747c-2f45-4557-ada2-a1da9bbfe0af', branch: branch
        }

        stage('Test') {
            def pom = readMavenPom file: 'pom.xml'
            print "Build: " + pom.version
            env.POM_VERSION = pom.version
            sh 'mvn clean test -Dmaven.test.failure.ignore=true'
            junit '**/target/surefire-reports/TEST-*.xml'
            currentBuild.description = "v${pom.version} (${env.branch})"
        }

        stage('QA') {
            withSonarQubeEnv('sonar') {
                sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
            }
        }

        stage('Build') {
            def server = Artifactory.server "server1"
            def buildInfo = Artifactory.newBuildInfo()
            def rtMaven = Artifactory.newMavenBuild()
            rtMaven.tool = 'Maven3'
            rtMaven.deployer releaseRepo:'libs-release-local', snapshotRepo:'libs-snapshot-local', server: server
            rtMaven.resolver releaseRepo:'remote-repos', snapshotRepo:'remote-repos', server: server
            rtMaven.run pom: 'pom.xml', goals: 'clean install -Dmaven.test.skip=true', buildInfo: buildInfo
            publishBuildInfo server: server, buildInfo: buildInfo
        }

        stage('Deploy') {
            dir('ansible') {
                ansiblePlaybook playbook: 'preprod.yml'
            }
            mail from: 'ci@example.com', to: 'piotr.minkowski@play.pl', subject: "New version of start: '${env.POM_VERSION}'", body: "Deployed new version of start '${env.POM_VERSION}' to the preproduction environment."
        }

    }
}

There are five stages in my pipeline:

  1. Checkout – source code checkout from the Git branch. The branch name is sent as a parameter by the GitLab webhook
  2. Test – running JUnit tests, creating a test report visible in Jenkins and changing the job description
  3. QA – running source code scanning with the SonarQube scanner
  4. Build – building the package, resolving artifacts from Artifactory and publishing the new application release to Artifactory
  5. Deploy – deploying the application package and configuration on the server using Ansible

According to the Ansible website, it is a simple automation language that can perfectly describe an IT application infrastructure. It’s easy to learn, self-documenting, and doesn’t require a grad-level computer science degree to read. Ansible uses SSH keys to authenticate on the remote host, so you have to put your SSH key into the authorized_keys file on the remote host before running Ansible commands against it. The main idea is to create a playbook with a set of Ansible tasks. Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Here is the catalog structure with the Ansible configuration for the application deployment.

start_ansible

Here’s my Ansible playbook code. It defines the remote host, the user to connect as and the role name. This file is used inside the Jenkins pipeline in the ansiblePlaybook step.

---
- hosts: pBPreprod
  remote_user: default

  roles:
    - preprod

Here’s the main.yml file, where we define the set of Ansible tasks to run on the remote server.

---
- block:
  - name: Copy configuration file
    template: src=config.yml.j2 dest=/opt/start/config.yml

  - name: Copy jar file
    copy: src=../target/start.jar dest=/opt/start/start.jar

  - name: Run jar file
    shell: java -jar /opt/start/start.jar

You can check out the build results in the Jenkins console. There is also a fine pipeline visualization with stage execution times. Each build history record has a link to the Artifactory build information and the SonarQube scanner report.

jenkins

Continuous configuration management with Jenkins and Liquibase

An important aspect of Continuous Delivery is application configuration management. Configuration is often stored in the database, especially for more complex business applications. The ability to automatically apply changes and roll them back when a new application version is rolled back seems to be very important for DevOps teams. Recently, I had an opportunity to use a powerful tool for tracking, managing and applying database schema changes – Liquibase. This tool has many interesting features, like advanced support for rollback, tagging and filtering the changes to run. It can be used from Maven, Spring and Jenkins, and has Hibernate support. Today, I’m going to show you how to use Liquibase for applying and rolling back database changes using the Maven plugin and also the Jenkins plugin.

The sample code is available on GitHub. We use liquibase-maven-plugin for calling Liquibase during the Maven build execution. Here’s the plugin configuration in pom.xml.

<build>
	<plugins>
		<plugin>
			<groupId>org.liquibase</groupId>
			<artifactId>liquibase-maven-plugin</artifactId>
			<version>3.5.3</version>
			<configuration>
				<propertyFile>src/main/resources/liquibase.properties</propertyFile>
			</configuration>
			<executions>
				<execution>
					<phase>process-resources</phase>
					<goals>
						<goal>update</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Here is the properties configuration file with the database settings and the Liquibase changelog location.

changeLogFile: src/main/script/changelog-master.xml
driver: com.mysql.jdbc.Driver
url: jdbc:mysql://192.168.99.100:33306/default?useSSL=false
username: default
password: default
verbose: true
dropFirst: false

Database changes are listed in the changelog-master.xml file. It is XML based, but there is also support for YAML, JSON, SQL and even the Groovy language. According to the liquibase.org site, the best practice is to organize your changelogs by major release.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
		http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

	<include file="src/main/script/changelog-1.0.xml" />
	<include file="src/main/script/changelog-1.1.xml" />
	<include file="src/main/script/changelog-1.2.xml" />

</databaseChangeLog>

Here’s changelog-1.0.xml. We’re going to create one table, person, with some columns. Inside the rollback tag we place the configuration for rolling back those changes – a drop of the newly created table.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9 http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">

	<changeSet author="minkowski" id="1.0">
		<createTable tableName="person">
			<column name="id" type="int">
				<constraints primaryKey="true" />
			</column>
			<column name="first_name" type="varchar(20)" />
			<column name="last_name" type="varchar(50)" />
			<column name="age" type="int" />
		</createTable>
		<addAutoIncrement tableName="person" columnName="id" columnDataType="int" />
		<rollback>
			<dropTable tableName="person" />
		</rollback>
	</changeSet>

</databaseChangeLog>

We apply our changes to the database by running a Maven command on the project. It’s also important to have the mysql-connector dependency in pom.xml so the MySQL driver is on the project classpath.

mvn package liquibase:update
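
The MySQL driver dependency mentioned above can be declared as follows; the version shown is only an example.

<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<version>5.1.42</version>
</dependency>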

Now, if your build finished successfully, you can check out the changes committed to the database. A DATABASECHANGELOG table should also have been created with the history of changes performed by Liquibase. The changes can be rolled back using an mvn command. You can set a rollback date, a number of changesets to roll back or a tag name to roll the database back to.

mvn package liquibase:rollback
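
The rollback goal needs to know how far to roll back. Assuming the standard liquibase-maven-plugin property names, this can be passed on the command line, for example:

mvn package liquibase:rollback -Dliquibase.rollbackCount=1
mvn package liquibase:rollback -Dliquibase.rollbackTag=v1.0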

There is also support for Liquibase in Jenkins, provided by the liquibase-runner plugin. It has had pipeline support since version 1.2.0. First you need to install the plugin in the Manage Jenkins -> Manage Plugins section. Then you can call it from your pipeline. Here are example pipelines for updating and rolling back changes.

node {
    stage('Checkout') {
        git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
    }

    stage('Update') {
        liquibaseUpdate changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL'
    }
}

node {
    stage('Checkout') {
        git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
    }

    stage('Rollback') {
        liquibaseRollback changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL', rollbackCount: 2
    }
}

In case someone is not very familiar with Jenkins: credentialsId needs to be configured in the Jenkins 'Credentials' section, like in the picture below, and referenced by ID inside the pipeline. In the first step of each pipeline, named 'Checkout', we clone the Git repository from github.com. In the second, we call methods of the Liquibase Jenkins plugin, passing the same arguments we set in the properties file for the Maven plugin. We call the liquibaseRollback method with rollbackCount=2, which means that two changesets will be rolled back: 1.2 and 1.1 from my sample configuration available on GitHub.

jenkins

Unfortunately, the liquibase-runner plugin does not support the Oracle database engine in pipelines. But I hope it will be fixed in the future: issue reported by me 🙂