How to set up a Continuous Delivery environment

I have already read some interesting articles and books about Continuous Delivery, because I had to set it up inside my organization. The most recent document on this subject I can recommend is the DZone Guide to DevOps. If you are interested in this area of software development, it can be really enlightening reading. The main purpose of my article is to show the practical side of Continuous Delivery – the tools that can be used to build such an environment. I'm going to show how to build a professional Continuous Delivery environment using:

  • Jenkins – most popular open source automation server
  • GitLab – web-based Git repository manager
  • Artifactory – open source Maven repository manager
  • Ansible – simple open source automation engine
  • SonarQube – open source platform for continuous code quality

Here’s a picture showing our Continuous Delivery environment.

continuous_delivery

The changes pushed to the Git repository managed by the GitLab server are automatically propagated to Jenkins using a webhook. We enable push and merge request triggers, and SSL verification is disabled. In the URL field we have to put the Jenkins pipeline address with authentication credentials (user and API token) and the secret token. The API token is visible in the Jenkins user profile under the Configure tab.
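
For example, the webhook URL could take a form like this (the user, token, host and job name are all placeholders):

http://<user>:<api_token>@<jenkins_host>/project/<pipeline_name>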

webhook

Here’s the Jenkins pipeline configuration in the ‘Build Triggers’ section. We have to enable the option ‘Build when a change is pushed to GitLab‘. The GitLab CI Service URL is the address we have already set in the GitLab webhook configuration. Push and merge request triggers are enabled for all branches. Additional restrictions for branch filtering can also be added: by name or by regex. To support this kind of trigger in Jenkins you need to have the GitLab plugin installed.

jenkins

There are two types of events that can trigger the Jenkins build:

  • push – a change in source code is pushed directly to the Git repository
  • merge request – a change in source code is pushed to one branch and then the committer creates a merge request to the build branch from the GitLab management console

If you would like to use the first option, you have to disable protection of the build branch to allow pushing directly to it. If you use merge requests, branch protection needs to be activated.

protection

Creating a merge request from the GitLab console is very intuitive. Under the ‘Merge requests’ section we select the source and target branch and confirm the action.

merge

OK, that’s a lot about GitLab and Jenkins integration… Now you know how to configure it. You only have to decide whether you prefer push or merge request triggers in your Continuous Delivery configuration. Merge requests are used for code review in GitLab, so they are a useful additional step in your continuous pipeline. Let’s move on. We also have to install some other plugins in Jenkins to integrate it with Artifactory, SonarQube and Ansible, in addition to the GitLab plugin mentioned earlier.

Here’s the configuration of my Jenkins pipeline for a sample Maven project.

node {

    withEnv(["PATH+MAVEN=${tool 'Maven3'}bin"]) {

        stage('Checkout') {
            def branch = env.gitlabBranch
            env.branch = branch
            git url: 'http://172.16.42.157/minkowp/start.git', credentialsId: '5693747c-2f45-4557-ada2-a1da9bbfe0af', branch: branch
        }

        stage('Test') {
            def pom = readMavenPom file: 'pom.xml'
            print "Build: " + pom.version
            env.POM_VERSION = pom.version
            sh 'mvn clean test -Dmaven.test.failure.ignore=true'
            junit '**/target/surefire-reports/TEST-*.xml'
            currentBuild.description = "v${pom.version} (${env.branch})"
        }

        stage('QA') {
            withSonarQubeEnv('sonar') {
                sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
            }
        }

        stage('Build') {
            def server = Artifactory.server "server1"
            def buildInfo = Artifactory.newBuildInfo()
            def rtMaven = Artifactory.newMavenBuild()
            rtMaven.tool = 'Maven3'
            rtMaven.deployer releaseRepo:'libs-release-local', snapshotRepo:'libs-snapshot-local', server: server
            rtMaven.resolver releaseRepo:'remote-repos', snapshotRepo:'remote-repos', server: server
            rtMaven.run pom: 'pom.xml', goals: 'clean install -Dmaven.test.skip=true', buildInfo: buildInfo
            publishBuildInfo server: server, buildInfo: buildInfo
        }

        stage('Deploy') {
            dir('ansible') {
                ansiblePlaybook playbook: 'preprod.yml'
            }
            mail from: 'ci@example.com', to: 'piotr.minkowski@play.pl', subject: "New version of start: '${env.POM_VERSION}'", body: "The new version of start '${env.POM_VERSION}' has been deployed to the preproduction environment."
        }

    }
}

There are five stages in my pipeline:

  1. Checkout – source code checkout from the Git branch. The branch name is sent as a parameter by the GitLab webhook
  2. Test – running JUnit tests, publishing the test report visible in Jenkins and setting the job description
  3. QA – scanning the source code with the SonarQube scanner
  4. Build – building the package, resolving artifacts from Artifactory and publishing the new application release to Artifactory
  5. Deploy – deploying the application package and configuration to the server using Ansible

According to the Ansible website, it is a simple automation language that can perfectly describe an IT application infrastructure. It’s easy to learn, self-documenting, and doesn’t require a grad-level computer science degree to read. Ansible uses SSH to authenticate on the remote host, so you have to add your SSH public key to the authorized_keys file on the remote host before running Ansible commands against it. The main idea is to create a playbook with a set of Ansible tasks. Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Here is the catalog structure with the Ansible configuration for the application deployment.

start_ansible
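
A simple way to put the Jenkins user’s public key on the target machine is ssh-copy-id; the host name below is just a placeholder, while the user matches the remote_user from the playbook:

# host name is a placeholder; 'default' is the remote_user from the playbook
ssh-copy-id default@preprod-host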

Here’s my Ansible playbook code. It defines the remote host group, the user to connect as and the role name. This file is used inside the Jenkins pipeline by the ansiblePlaybook step.

---
- hosts: pBPreprod
  remote_user: default

  roles:
    - preprod
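
The pBPreprod host group used in the playbook is expected to be defined in an Ansible inventory file. Here’s a minimal sketch with a placeholder host address:

[pBPreprod]
# placeholder address of the preproduction server
preprod.example.com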

Here’s the main.yml file of the role, where we define the set of Ansible tasks to run on the remote server.

---
- block:
  - name: Copy configuration file
    template: src=config.yml.j2 dest=/opt/start/config.yml

  - name: Copy jar file
    copy: src=../target/start.jar dest=/opt/start/start.jar

  - name: Run jar file
    shell: java -jar /opt/start/start.jar

You can check out the build results in the Jenkins console. There is also a nice pipeline visualization with stage execution times. Each build history record has links to the Artifactory build information and the SonarQube scanner report.

jenkins


Continuous configuration management with Jenkins and Liquibase

An important aspect of Continuous Delivery is application configuration management. Configuration is often stored in the database, especially for more complex business applications. The ability to automatically apply changes and roll them back when a new application version is rolled back is very important for DevOps teams. Recently, I had an opportunity to use a powerful tool for tracking, managing and applying database schema changes – Liquibase. This tool has many interesting features like advanced support for rollback, tagging and filtering of changes to run. It can be used from Maven, Spring and Jenkins, and has Hibernate support. Today, I’m going to show you how to use Liquibase to update and roll back database changes using its Maven plugin and its Jenkins plugin.

Sample code is available on GitHub. We use liquibase-maven-plugin to call Liquibase during the Maven build. Here’s the plugin configuration in pom.xml.

<build>
	<plugins>
		<plugin>
			<groupId>org.liquibase</groupId>
			<artifactId>liquibase-maven-plugin</artifactId>
			<version>3.5.3</version>
			<configuration>
				<propertyFile>src/main/resources/liquibase.properties</propertyFile>
			</configuration>
			<executions>
				<execution>
					<phase>process-resources</phase>
					<goals>
						<goal>update</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Here is the properties file with the database settings and the Liquibase changelog location.

changeLogFile: src/main/script/changelog-master.xml
driver: com.mysql.jdbc.Driver
url: jdbc:mysql://192.168.99.100:33306/default?useSSL=false
username: default
password: default
verbose: true
dropFirst: false

Database changes are listed in the changelog-master.xml file. It is XML based, but there is also support for YAML, JSON, SQL and even Groovy. According to liquibase.org, the best practice is to organize your changelogs by major release.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

<include file="src/main/script/changelog-1.0.xml" />
<include file="src/main/script/changelog-1.1.xml" />
<include file="src/main/script/changelog-1.2.xml" />

</databaseChangeLog> 
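
As a side note, the same master changelog could also be written in YAML – a sketch only, since this example sticks to XML:

# YAML equivalent of changelog-master.xml above
databaseChangeLog:
  - include:
      file: src/main/script/changelog-1.0.xml
  - include:
      file: src/main/script/changelog-1.1.xml
  - include:
      file: src/main/script/changelog-1.2.xml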

Here’s changelog-1.0.xml. We’re going to create one table, person, with a few columns and an auto-incremented primary key. Inside the rollback tag we place the configuration for rolling back those changes – dropping the newly created table.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9 http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">

<changeSet author="minkowski" id="1.0">
<createTable tableName="person">
<column name="id" type="int">
<constraints primaryKey="true" />
</column>
<column name="first_name" type="varchar(20)" />
<column name="last_name" type="varchar(50)" />
<column name="age" type="int" />
</createTable>
<addAutoIncrement tableName="person" columnName="id" columnDataType="int" />
<rollback>
<dropTable tableName="person" />
</rollback>
</changeSet>

</databaseChangeLog> 

We apply our changes to the database by running the Maven command below on the project. It’s also important to place the mysql-connector dependency in pom.xml so that the MySQL driver is available on the project classpath.

mvn package liquibase:update
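
The mysql-connector dependency mentioned above could look like this (the version shown is only an example):

<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<!-- version is only an example -->
	<version>5.1.42</version>
</dependency>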

Now, if your build finished successfully, you can check out the changes that were committed to the database. A DATABASECHANGELOG table should also have been created with the history of changes performed by Liquibase. The changes can be rolled back using a Maven command. You can set a rollback date, the number of changesets to roll back, or a tag name to roll the database back to.

mvn package liquibase:rollback
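
For example, to roll back the last two changesets you could pass the rollback count as a property – this is only a sketch, so check the liquibase-maven-plugin documentation for the exact property names supported by your version:

# liquibase.rollbackCount tells the plugin how many changesets to roll back
mvn liquibase:rollback -Dliquibase.rollbackCount=2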

There is also support for Liquibase in Jenkins, provided by the liquibase-runner plugin. It has had pipeline support since version 1.2.0. First you need to install the plugin in the Manage Jenkins -> Manage Plugins section. Then you can call it from your pipeline. Here are example pipelines for updating and rolling back changes.

node {
    stage('Checkout') {
        git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
    }

    stage('Update') {
        liquibaseUpdate changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL'
    }
}

node {
    stage('Checkout') {
        git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
    }

    stage('Rollback') {
        liquibaseRollback changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL', rollbackCount: 2
    }
}

In case someone is not very familiar with Jenkins: credentialsId needs to be configured in the Jenkins ‘Credentials’ section, like in the picture below, and referenced by ID inside the pipeline. In the first step of each pipeline, named ‘Checkout’, we clone the Git repository from github.com. In the second step we call methods of the Liquibase Jenkins plugin, passing the same arguments as we set in the properties file for the Maven plugin. We call the liquibaseRollback method with rollbackCount: 2, which means that two changesets will be rolled back – 1.2 and 1.1 from my sample configuration available on GitHub.

jenkins

Unfortunately, the liquibase-runner plugin does not support the Oracle database engine in pipelines. I hope it will be fixed in the future – I have reported the issue. 🙂

Launch microservice in Docker container

Docker, microservices and Continuous Delivery are increasingly popular topics among modern development teams. Today I’m going to create a simple microservice and show you how to run it in a Docker container using a Maven plugin or a Jenkins pipeline. Let’s start from the application code, which is available at https://github.com/piomin/sample-docker-microservice.git. It has endpoints for fetching all persons and a single person by id. Here’s the controller code:

package pl.piomin.microservices.person;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Api {

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Person> persons;

	public Api() {
		persons = new ArrayList<>();
		persons.add(new Person(1, "Jan", "Kowalski", 22));
		persons.add(new Person(2, "Adam", "Malinowski", 33));
		persons.add(new Person(3, "Tomasz", "Janowski", 25));
		persons.add(new Person(4, "Alina", "Iksińska", 54));
	}

	@RequestMapping("/person")
	public List<Person> findAll() {
		logger.info("Api.findAll()");
		return persons;
	}

	@RequestMapping("/person/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Api.findById(%d)", id));
		return persons.stream().filter(p -> (p.getId().intValue() == id)).findAny().get();
	}

}

We need to have Docker installed on our machine and a Docker Registry container running on port 5000. If you are interested in commercial support, there is also Docker Trusted Registry, which provides an image registry and some other features like LDAP/Active Directory integration and security certificates.

docker run -d --name registry -p 5000:5000 registry:latest

We use openjdk as the base image for our new microservice image defined in the Dockerfile. The application JAR file will be launched with the java command and the service exposed on port 2222.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD sample-docker-microservice-1.0-SNAPSHOT.jar person-service.jar
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 2222

We use docker-maven-plugin to configure the image building process inside pom.xml. There is no need to use a Dockerfile with that plugin – it has equivalent configuration tags which could be used instead of Dockerfile entries – but our example is based on a Dockerfile.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.13</version>
	<configuration>
		<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
		<imageTags>${project.version}</imageTags>
		<dockerDirectory>src/main/docker</dockerDirectory>
		<dockerHost>https://192.168.99.100:2376</dockerHost>
		<dockerCertPath>C:\Users\minkowp\.docker\machine\machines\default</dockerCertPath>
		<resources>
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>
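
The docker.image.prefix property referenced above is assumed to be defined in the properties section of pom.xml, for example:

<properties>
	<!-- example value; set it to whatever prefix you want for your image names -->
	<docker.image.prefix>microservices</docker.image.prefix>
</properties>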

Finally, we can build the code and the image using the Maven command.

mvn clean package docker:build

After running the Maven command, the image is tagged and pushed to the local registry with the following commands.

docker tag e106e5bf3d57 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
docker push localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT

The application image is now registered in the local Docker Registry. Optionally, we could push it to docker.io or to an enterprise Docker Trusted Registry. We can check it using the API available at http://192.168.99.100:5000/v2/_catalog. Here’s the Docker command for running a container from the newly created image stored in the local registry. The service is available at http://192.168.99.100:2222/person/.

docker run -d --name sample1 -p 2222:2222 microservice/sample-docker-microservice:1.0-SNAPSHOT

Part 1: Creating microservice using Spring Cloud, Eureka and Zuul

The Spring framework provides a set of libraries for creating microservices in Java. They are part of the Spring Cloud project. Today I’m going to show you how to create simple microservices using Spring Boot and the following technologies:

  • Zuul –  gateway service that provides dynamic routing, monitoring, resiliency, security, and more
  • Ribbon – client side load balancer
  • Feign – declarative REST client
  • Eureka – service registration and discovery
  • Sleuth – distributed tracing via logs
  • Zipkin – distributed tracing system with request visualization.

The sample application is available at https://github.com/piomin/sample-spring-microservices.git. Here’s a picture of the application architecture. The client calls, via the Zuul gateway, an endpoint available inside customer-service, which stores basic customer data. This endpoint interacts with account-service to collect information about customer accounts served by an endpoint in account-service. Each service registers itself with the Eureka discovery service and sends its traces to Zipkin using spring-cloud-sleuth.

architecture

This is the account-service controller. The findByCustomer method collects a customer’s accounts by customer id.

@RestController
public class Api {
	private List<Account> accounts;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	public Api() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, 1, "111111"));
		accounts.add(new Account(2, 2, "222222"));
		accounts.add(new Account(3, 3, "333333"));
		accounts.add(new Account(4, 4, "444444"));
		accounts.add(new Account(5, 1, "555555"));
		accounts.add(new Account(6, 2, "666666"));
		accounts.add(new Account(7, 2, "777777"));
	}

	@RequestMapping("/accounts/{number}")
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return accounts.stream().filter(it -> it.getNumber().equals(number)).findFirst().get();
	}

	@RequestMapping("/accounts/customer/{customer}")
	public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return accounts.stream().filter(it -> it.getCustomerId().intValue()==customerId.intValue()).collect(Collectors.toList());
	}

	@RequestMapping("/accounts")
	public List<Account> findAll() {
		logger.info("Account.findAll()");
		return accounts;
	}
}

This is the customer-service controller. The findById method interacts with account-service using a Feign client. The Feign client interface is shown right after the controller.

@RestController
public class Api {

	@Autowired
	private AccountClient accountClient;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Customer> customers;

	public Api() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "12345", "Adam Kowalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(2, "12346", "Anna Malinowska", CustomerType.INDIVIDUAL));
		customers.add(new Customer(3, "12347", "Paweł Michalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(4, "12348", "Karolina Lewandowska", CustomerType.INDIVIDUAL));
	}

	@RequestMapping("/customers/pesel/{pesel}")
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return customers.stream().filter(it -> it.getPesel().equals(pesel)).findFirst().get();
	}

	@RequestMapping("/customers")
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return customers;
	}

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts = accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
}

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

To be able to use the Feign client, we only have to enable it in our main class.


package pl.piomin.microservices.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}

There is also important configuration inside application.yml in customer-service. The Ribbon load balancer needs to be enabled, and I also suggest setting the lease renewal and expiration intervals on the Eureka client so that the service is unregistered from the discovery service when it shuts down.


server:
  port: ${PORT:3333}

eureka:
  client:
    serviceUrl:
      defaultZone: ${vcap.services.discovery-service.credentials.uri:http://127.0.0.1:8761}/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2

ribbon:
  eureka:
    enabled: true

OK, fine. We’ve got our two microservices implemented and configured. But first we have to create and run a discovery service based on the Eureka server. This functionality is provided by our discovery-service. We only have to import one dependency in pom.xml, called spring-cloud-starter-eureka-server, and enable it in the application main class using the @EnableEurekaServer annotation (a minimal sketch of that class is shown after the configuration below). Here is the configuration of the Eureka server in the application.yml file:


server:
  port: ${PORT:8761}

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false
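
And here is a minimal sketch of the discovery-service main class; the package name is an assumption, the important part is the @EnableEurekaServer annotation:

// package name is assumed for this sketch
package pl.piomin.microservices.discovery;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}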

After running discovery-service we can see its monitoring console available on port 8761. Now let’s run our two microservices on the default ports set in their application.yml configuration files, and two more instances of them on other ports using the -DPORT VM argument, for example account-service on port 2223 and customer-service on port 3334 (see the example command below). Taking a look at the Eureka monitoring console, we’ve got two instances of account-service running on ports 2222 and 2223 and two instances of customer-service running on ports 3333 and 3334.
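
For example, a second instance of account-service could be started like this; the jar path is just an assumption based on a standard Maven build layout:

# -DPORT overrides the PORT placeholder used in application.yml; the jar path is assumed
java -DPORT=2223 -jar account-service/target/account-service-1.0-SNAPSHOT.jar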

eureka

We have two instances of each microservice registered on the discovery server. But we need to hide the complexity of our system from the outside world: there should be only one IP address exposed on one port available to inbound clients. That’s why we need an API gateway – Zuul. Zuul will forward our requests to the specific microservice based on its proxy configuration. Such requests will also be load balanced by the Ribbon client. To enable the Zuul gateway, the spring-cloud-starter-zuul dependency should be added to pom.xml and the @EnableZuulProxy annotation to the main class (a sketch of the gateway main class follows the configuration below). This is the Zuul configuration for our services set in application.yml.


server:
  port: 8765

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service

...
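
Here’s a minimal sketch of the gateway main class; the package name is an assumption, the important part is the @EnableZuulProxy annotation:

// package name is assumed for this sketch
package pl.piomin.microservices.gateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}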

As we can see, Zuul is configured to be available under its default port 8765, and it forwards requests from the /api/account/ path to account-service and from /api/customer/ to customer-service. When the URL http://localhost:8765/api/customer/customers/1 is called several times, we’ll see the requests load balanced between the two instances of each microservice. Also, when we shut down one of the microservice instances, we can see that it is unregistered from the Eureka server.

In the second part of the article I’ll show how to use Spring Cloud Sleuth, Zipkin and the ELK stack. If you are interested, see Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin.

How to ship logs with Logstash, Elasticsearch and RabbitMQ

Here’s a simple picture of our solution. We’ll start from a sample Spring Boot application shipping logs to a RabbitMQ exchange. Then, using Docker, we’ll configure an environment containing RabbitMQ, Logstash, Elasticsearch and Kibana – each running in a separate Docker container.

architecture

My sample Java application is available on https://github.com/piomin/sample-amqp-logging.git.

There are only two Spring Boot dependencies needed inside pom.xml: the first for the REST controller and the second for AMQP support.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-rest</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-amqp</artifactId>
	</dependency>
</dependencies>

Here’s a simple controller with one logging message.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Controller {

 protected Logger logger = LoggerFactory.getLogger(Controller.class.getName());

 @RequestMapping("/hello/{param}")
 public String hello(@PathVariable("param") String param) {
  logger.info("Controller.hello(" + param + ")");
  return "Hello";
 }

}

I use Logback as the logger implementation and the Spring AMQP appender for sending logs to RabbitMQ over the AMQP protocol.

<appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
	<layout>
		<pattern>
			{
			"time": "%date{ISO8601}",
			"thread": "%thread",
			"level": "%level",
			"class": "%logger{36}",
			"message": "%message"
			}
		</pattern>
	</layout>

	<!-- RabbitMQ connection -->
	<host>192.168.99.100</host>
	<port>30000</port>
	<username>guest</username>
	<password>guest</password>

	<applicationId>api-service-4</applicationId>
	<routingKeyPattern>api-service-4</routingKeyPattern>
	<declareExchange>true</declareExchange>
	<exchangeType>direct</exchangeType>
	<exchangeName>ex_logstash</exchangeName>

	<generateId>true</generateId>
	<charset>UTF-8</charset>
	<durable>true</durable>
	<deliveryMode>PERSISTENT</deliveryMode>
</appender>
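
To make the appender active it also has to be referenced from a logger in logback.xml, for example from the root logger:

<!-- the level shown is only an example -->
<root level="INFO">
	<appender-ref ref="AMQP" />
</root>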

I run the RabbitMQ server using the Docker image from https://hub.docker.com/_/rabbitmq/. Here’s the Docker command for it. I chose the rabbitmq:management Docker image to expose the RabbitMQ UI management console on port 30001. After running this command we can go to the management console available at 192.168.99.100:30001. There we have to create a queue named q_logstash and a direct exchange named ex_logstash with a routing key binding it to the q_logstash queue.

docker run -d -it --name rabbit --hostname rabbit -p 30000:5672 -p 30001:15672 rabbitmq:management

rabbit
RabbitMQ management console with exchange and queue binding

Then we run the Elasticsearch and Kibana Docker images. The Kibana container needs to be linked to the Elasticsearch container.

docker run -d -it --name es -p 9200:9200 -p 9300:9300 elasticsearch
docker run -d -it --name kibana --link es:elasticsearch -p 5601:5601 kibana

Finally, we can run the Logstash Docker image, which gets the RabbitMQ queue as input and sets the Elasticsearch API as output. We have to change the host to the Docker Machine default address and the port configured when running the RabbitMQ container. We also have a durable queue, so the durable setting has to be changed, because its default value is false according to this reference: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html

docker run -d -it --name logstash logstash -e 'input { rabbitmq {
host => "192.168.99.100" port => 30000 durable => true } }
output { elasticsearch { hosts => ["192.168.99.100"] } }'

After running all the Docker containers for RabbitMQ, Logstash, Elasticsearch and Kibana, we can run our sample Spring Boot application and see the logs in Kibana, available at http://192.168.99.100:5601.