Logging with Spring Boot and Elastic Stack

In this article I’ll introduce my library for logging, designed especially for Spring Boot RESTful web applications. The main assumptions behind this library are listed below (a minimal Logstash appender sketch follows the list):

  • Logging all incoming HTTP requests and outgoing HTTP responses with full body
  • Integration with Elastic Stack through Logstash using logstash-logback-encoder library
  • Possibility of enabling logging on the client side for the most commonly used Spring Boot components: RestTemplate and OpenFeign
  • Generating and propagating correlationId across all communication within a single API endpoint call
  • Calculating and storing execution time for each request
  • The library should be auto-configurable – you don’t have to do anything more than include it as a dependency in your application to make it work
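
For context, shipping logs to Logstash with the logstash-logback-encoder library usually comes down to a logback-spring.xml similar to the sketch below; the destination host and port are placeholders for your own Logstash TCP input, and the library described in the article adds request/response logging and correlationId propagation on top of such a setup.

<configuration>
	<!-- Sends JSON-encoded log events to a Logstash TCP input (the address is a placeholder) -->
	<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
		<destination>localhost:5000</destination>
		<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
	</appender>
	<root level="INFO">
		<appender-ref ref="LOGSTASH"/>
	</root>
</configuration>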

Continue reading “Logging with Spring Boot and Elastic Stack”


Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barrier stands Micronaut, a framework quickly gaining popularity, designed especially for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size (in MB) of the generated fat JAR file
  • the application startup time
  • the performance of the application, measured as the average response time from the REST endpoint during sample load testing

Continue reading “Performance Comparison Between Spring Boot and Micronaut”

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Given this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is part of a powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform a full-text search over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always the question of the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins). Continue reading “Elasticsearch with Spring Boot”
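
Whichever synchronization strategy you choose, reading from and writing to the secondary Elasticsearch store from a Spring Boot application is typically done through Spring Data Elasticsearch. A minimal sketch is shown below; the Order entity and repository are hypothetical and assume spring-boot-starter-data-elasticsearch is on the classpath.

import org.springframework.data.annotation.Id
import org.springframework.data.elasticsearch.annotations.Document
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository

// Hypothetical document stored in the "orders" index
@Document(indexName = "orders")
data class Order(@Id var id: String? = null, var description: String = "")

// Spring Data generates the implementation; the derived query performs a full-text search on the description field
interface OrderRepository : ElasticsearchRepository<Order, String> {
    fun findByDescriptionContaining(text: String): List<Order>
}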

A Magic Around Spring Boot Externalized Configuration

There are some things I really like in Spring Boot, and one of them is externalized configuration. Spring Boot allows you to configure your application in many ways. You have 17 levels of loading configuration properties into the application. All of them are described in Chapter 24 of the Spring Boot documentation, available here.
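
As a quick illustration of that precedence: suppose application.yml packaged inside the JAR sets server.port to 8080. An OS environment variable such as SERVER_PORT overrides the file, and a command-line argument overrides both (app.jar is just a placeholder name here):

$ java -jar app.jar --server.port=9090

The application starts on port 9090 regardless of what the file or the environment say, because command-line arguments sit near the top of that list of 17 sources.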

This article was inspired by some recent talks with developers about problems with the configuration of their applications. They hadn’t heard about some interesting features that can be used to make it more flexible and clearer. Continue reading “A Magic Around Spring Boot Externalized Configuration”

Introduction to Spring Data Redis

Redis is an in-memory data structure store with optional durability, used as a database, cache and message broker. Currently, it is the most popular tool in the key/value stores category: https://db-engines.com/en/ranking/key-value+store. The easiest way to integrate your application with Redis is through Spring Data Redis. You can use Spring RedisTemplate directly for that, or you might as well use Spring Data Redis repositories. There are some limitations when you integrate with Redis via Spring Data Redis repositories. They require at least Redis Server version 2.8.0 and do not work with transactions. Therefore you need to disable transaction support for RedisTemplate, which is leveraged by Redis repositories. Continue reading “Introduction to Spring Data Redis”
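
To give a feel for the repository style mentioned above, here is a minimal sketch (hypothetical Customer entity, assuming spring-boot-starter-data-redis is on the classpath and a Redis instance is running): an entity annotated with @RedisHash and a plain CrudRepository are enough to get keyspace-based persistence.

import org.springframework.data.annotation.Id
import org.springframework.data.redis.core.RedisHash
import org.springframework.data.repository.CrudRepository

// Stored in Redis under the "customers" keyspace, one hash per entity
@RedisHash("customers")
data class Customer(@Id var id: String? = null, var name: String = "")

// Spring Data generates the implementation at runtime
interface CustomerRepository : CrudRepository<Customer, String>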

Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more and more often used together with Spring Boot for building backend services. Starting with version 5, the Spring Framework introduced first-class support for Kotlin. In this article I’m going to show you an example of a microservice built with Kotlin and Spring Boot 2. I’ll describe some interesting features of Spring Boot, which can be treated as a set of good practices when building backend, REST-based microservices.

1. Configuration and dependencies

To use Kotlin in your Maven project you have to include the kotlin-maven-plugin and add the /src/main/kotlin and /src/test/kotlin directories to the build configuration. We will also set the -Xjsr305 compiler flag to strict. This option enables strict checking of JSR-305 nullability annotations (for example the @NotNull annotation), so that nullability declared in Java APIs is respected by the Kotlin compiler.

<build>
	<sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
	<testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
	<plugins>
		<plugin>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-plugin</artifactId>
			<configuration>
				<args>
					<arg>-Xjsr305=strict</arg>
				</args>
				<compilerPlugins>
					<plugin>spring</plugin>
				</compilerPlugins>
			</configuration>
			<dependencies>
				<dependency>
					<groupId>org.jetbrains.kotlin</groupId>
					<artifactId>kotlin-maven-allopen</artifactId>
					<version>${kotlin.version}</version>
				</dependency>
			</dependencies>
		</plugin>
	</plugins>
</build>

We should also include some core Kotlin libraries like kotlin-stdlib-jdk8 and kotlin-reflect. They are provided by default for a Kotlin project generated on start.spring.io. For REST-based applications you will also need the Jackson library, used for JSON serialization/deserialization. Of course, we have to include the Spring starter for web applications together with the Actuator starter, responsible for providing management endpoints.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>

We use the latest stable version of Spring Boot together with Kotlin 1.2.71:

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.2.RELEASE</version>
</parent>
<properties>
	<java.version>1.8</java.version>
	<kotlin.version>1.2.71</kotlin.version>
</properties>

2. Building application

Let’s begin with the basics. If you are familiar with Spring Boot and Java, the biggest difference is in the main class declaration. You call the runApplication method outside the Spring Boot application class. The main class, the same as in Java, is annotated with @SpringBootApplication.

@SpringBootApplication
class SampleSpringKotlinMicroserviceApplication

fun main(args: Array<String>) {
    runApplication<SampleSpringKotlinMicroserviceApplication>(*args)
}

Our sample application is very simple. It exposes some REST endpoints providing CRUD operations for a model object. Even in this fragment of code illustrating the controller implementation you can see some nice Kotlin features. We may use a shortened function declaration with an inferred return type. The @PathVariable annotation does not require any arguments; the input parameter name is assumed to be the same as the variable name. Of course, we are using the same annotations as in Java. In Kotlin, every property declared as having a non-null type must be initialized in the constructor. So, if you are initializing it using dependency injection, it has to be declared as lateinit. Here’s the implementation of PersonController.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Autowired
    lateinit var repository: PersonRepository

    @GetMapping("/{id}")
    fun findById(@PathVariable id: Int): Person? = repository.findById(id)

    @GetMapping
    fun findAll(): List<Person> = repository.findAll()

    @PostMapping
    fun add(@RequestBody person: Person): Person = repository.save(person)

    @PutMapping
    fun update(@RequestBody person: Person): Person = repository.update(person)

    @DeleteMapping("/{id}")
    fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Kotlin automatically generates getters and setters for class properties declared as var. Also, if you declare the model as a data class, it generates the equals, hashCode, and toString methods. The declaration of our model class Person is very concise, as shown below.

data class Person(var id: Int?, var name: String, var age: Int, var gender: Gender)
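
A quick, illustrative snippet showing what the compiler generates for a data class (it assumes the Gender enum declares a MALE constant):

val p1 = Person(1, "John", 30, Gender.MALE)
val p2 = p1.copy(name = "Tom")     // copy() is also generated for data classes
println(p1)                        // generated toString(): Person(id=1, name=John, age=30, gender=MALE)
println(p1 == p1.copy())           // true – generated equals() compares all properties
println(p1 == p2)                  // false – the name differs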

I have implemented my own in-memory repository class. I use Kotlin extension functions for manipulating the list of elements. This built-in Kotlin feature is similar to Java streams, with the difference that you don’t have to perform any conversion between Collection and Stream.

@Repository
class PersonRepository {
    val persons: MutableList<Person> = ArrayList()

    fun findById(id: Int): Person? {
        return persons.singleOrNull { it.id == id }
    }

    fun findAll(): List<Person> {
        return persons
    }

    fun save(person: Person): Person {
        person.id = (persons.maxBy { it.id!! }?.id ?: 0) + 1
        persons.add(person)
        return person
    }

    fun update(person: Person): Person {
        val index = persons.indexOfFirst { it.id == person.id }
        if (index >= 0) {
            persons[index] = person
        }
        return person
    }

    fun removeById(id: Int): Boolean {
        return persons.removeIf { it.id == id }
    }

}

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-spring-kotlin-microservice.git.

3. Enabling Actuator endpoints

Since we have already included the Spring Boot Actuator starter in the application, we can take advantage of its production-ready features. Spring Boot Actuator gives you very powerful tools for monitoring and managing your apps. You can provide advanced healthchecks and info endpoints, or send metrics to numerous monitoring systems like InfluxDB. After including the Actuator artifacts, the only thing we have to do is enable all of its endpoints over HTTP for our application.

management.endpoints.web.exposure.include: '*'

We can customize Actuator endpoints to provide more details about our app. A good practice is to expose information about the version and git commit via the info endpoint. As usual, Spring Boot provides auto-configuration for such features, so the only thing we need to do is add some Maven plugins to the build configuration in pom.xml. The build-info goal set for spring-boot-maven-plugin forces it to generate a properties file with basic information about the version. The file is located in the directory META-INF/build-info.properties. The git-commit-id-plugin will generate a git.properties file in the root directory.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>build-info</goal>
			</goals>
		</execution>
	</executions>
</plugin>
<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<configuration>
		<failOnNoGitDirectory>false</failOnNoGitDirectory>
	</configuration>
</plugin>

Now you should just build your application using the mvn clean install command and then run it:

$ java -jar target\sample-spring-kotlin-microservice-1.0-SNAPSHOT.jar

The info endpoint is available at http://localhost:8080/actuator/info. It exposes all the information interesting to us:

{
	"git":{
		"commit":{
			"time":"2019-01-14T16:20:31Z",
			"id":"f7cb437"
		},
		"branch":"master"
	},
	"build":{
		"version":"1.0-SNAPSHOT",
		"artifact":"sample-spring-kotlin-microservice",
		"name":"sample-spring-kotlin-microservice",
		"group":"pl.piomin.services",
		"time":"2019-01-15T09:18:48.836Z"
	}
}

4. Enabling API documentation

Build info and git properties may be easily injected into the application code. This can be useful in some cases, for example if you have enabled auto-generated API documentation. The most popular tool used for that is Swagger. You can easily integrate Swagger2 with Spring Boot using the SpringFox Swagger project. First, you need to add the following dependencies to your pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>

Then, you should enable Swagger by annotating the configuration class with @EnableSwagger2. The required information is available inside the BuildProperties and GitProperties beans. We just have to inject them into the Swagger configuration class as shown below. We inject them as Optional to prevent an application startup failure in case they are not present on the classpath.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            var buildInfo = build.get()
            var gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths { it.equals("/persons") }
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true)
    }

    @Bean
    fun uiConfig(): UiConfiguration {
        return UiConfiguration(java.lang.Boolean.TRUE, java.lang.Boolean.FALSE, 1, 1, ModelRendering.MODEL, java.lang.Boolean.FALSE, DocExpansion.LIST, java.lang.Boolean.FALSE, null, OperationsSorter.ALPHA, java.lang.Boolean.FALSE, TagsSorter.ALPHA, UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, null)
    }

    private fun apiInfo(version: String): ApiInfo {
        return ApiInfoBuilder()
                .title("API - Person Service")
                .description("Persons Management")
                .version(version)
                .build()
    }

}

The documentation is available under the context path /swagger-ui.html. Besides the API documentation, it displays full information about the application version, git commit id and branch name.

kotlin-microservices-1.PNG

5. Choosing your app server

A Spring Boot web application can be run on three different embedded servers: Tomcat, Jetty or Undertow. By default it uses Tomcat. To change the default server you just need to include the suitable Spring Boot starter and exclude spring-boot-starter-tomcat. A good practice may be to enable switching between servers during the application build. You can achieve this by declaring Maven profiles as shown below.

<profiles>
	<profile>
		<id>tomcat</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>jetty</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-jetty</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>undertow</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-undertow</artifactId>
			</dependency>
		</dependencies>
	</profile>
</profiles>

Now, if you would like to use a server other than Tomcat for your application, you should activate the appropriate profile during the Maven build.

$ mvn clean install -Pjetty

Conclusion

Development of microservices using Kotlin and Spring Boot is nice and simple. Based on the sample application, I have introduced the main Spring Boot features for Kotlin. I have also described some good practices you may apply to your microservices when building them with Spring Boot and Kotlin. You can compare the described approach with some other micro-frameworks used with Kotlin, for example Ktor, described in one of my previous articles, Kotlin Microservices with Ktor.

Spring Boot Autoscaler

One of the more important reasons for deciding to use tools like Kubernetes, Pivotal Cloud Foundry or HashiCorp’s Nomad is the ability to auto-scale our applications. Of course those tools provide many other useful mechanisms, but we can also implement auto-scaling ourselves. At first glance it seems difficult, but assuming we use Spring Boot as the framework for building our applications and Jenkins as the CI server, it does not require a lot of work in the end. Today, I’m going to show you how to implement such a solution using the following frameworks/tools:

  • Spring Boot
  • Spring Boot Actuator
  • Spring Cloud Netflix Eureka
  • Jenkins CI

How does it work?

Every Spring Boot application that contains the Spring Boot Actuator library can expose metrics under the endpoint /actuator/metrics. There are many valuable metrics that give you detailed information about the application status. Some of them may be especially important when talking about autoscaling: JVM and CPU metrics, the number of running threads and the number of incoming HTTP requests. A dedicated Jenkins pipeline is responsible for monitoring the application’s metrics by polling the endpoint /actuator/metrics periodically. If any monitored metric is below or above the target range, it starts a new instance or shuts down a running instance of the application using another Actuator endpoint, /actuator/shutdown. Before that, it needs to fetch the current list of running instances of the application in order to get the address of the existing instance selected for shutting down, or the address of the server with the smallest number of running instances for a new instance of the application.

spring-autoscaler-1

After discussing the architecture of our system, we may proceed to the development. Our application needs to meet some requirements: it has to expose metrics and an endpoint for graceful shutdown, it needs to register in Eureka after startup and deregister on shutdown, and finally it should also dynamically allocate its running port randomly from the pool of free ports. Thanks to Spring Boot we may easily implement all these mechanisms in five minutes 🙂

Dynamic port allocation

Since it is possible to run many instances of the application on a single machine, we have to guarantee that there won’t be any conflicts between port numbers. Fortunately, Spring Boot provides such a mechanism for an application. We just need to set the port number to 0 inside the application.yml file using the server.port property. Because our application registers itself in Eureka, it also needs to send a unique instanceId, which is by default generated as a concatenation of the fields spring.cloud.client.hostname, spring.application.name and server.port.
Here’s the current configuration of our sample application. I have changed the template of the instanceId field by replacing the port number with a randomly generated number.

spring:
  application:
    name: example-service
server:
  port: ${PORT:0}
eureka:
  instance:
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${random.int[1,999999]}

Enabling Actuator metrics

To enable Spring Boot Actuator we need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

We also have to enable exposure of the Actuator endpoints over HTTP by setting the property management.endpoints.web.exposure.include to '*'. Now, the list of all available metric names is available under the context path /actuator/metrics, while detailed information for each metric is available under the path /actuator/metrics/{metricName}.

Graceful shutdown

Besides metrics, Spring Boot Actuator also provides an endpoint for shutting down an application. However, in contrast to other endpoints, this endpoint is not available by default. We have to set the property management.endpoint.shutdown.enabled to true. After that we will be able to stop our application by sending a POST request to the /actuator/shutdown endpoint.
This method of stopping the application guarantees that the service will unregister itself from the Eureka server before shutdown.
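
Putting the two Actuator-related settings mentioned so far together, the relevant fragment of application.yml for the scaled service looks more or less like this:

management:
  endpoint:
    shutdown:
      enabled: true   # allows POST /actuator/shutdown
  endpoints:
    web:
      exposure:
        include: '*'  # exposes /actuator/metrics (and all other endpoints) over HTTP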

Enabling Eureka discovery

Eureka is the most popular discovery server used for building microservices-based architectures with Spring Cloud. So, if you already have microservices and want to provide an auto-scaling mechanism for them, Eureka is a natural choice. It holds the IP address and port number of every registered instance of an application. To enable Eureka on the client side you just need to include the following dependency in your pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

As I have mentioned before, we also have to guarantee the uniqueness of the instanceId sent to the Eureka server by the client-side application. This has been described in the step “Dynamic port allocation”.
The next step is to create an application with an embedded Eureka server. To achieve that, we first need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

The main class should be annotated with @EnableEurekaServer.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(DiscoveryApp.class).run(args);
    }

}

Client-side applications by default try to connect to the Eureka server on localhost under port 8761. We only need a single, standalone Eureka node, so we will disable registration and attempts to fetch the list of services from other instances of the server.

spring:
  application:
    name: discovery-service
server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

The tests of the sample autoscaling system will be performed using Docker containers, so we need to prepare and build an image with the Eureka server. Here’s the Dockerfile with the image definition. It can be built using the command docker build -t piomin/discovery-server:2.0 . (the trailing dot is the build context).

FROM openjdk:8-jre-alpine
ENV APP_FILE discovery-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8761
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Building Jenkins pipeline for autoscaling

The first step is to prepare a Jenkins pipeline responsible for autoscaling. We will create a Jenkins Declarative Pipeline, which runs every minute. Periodic execution may be configured with the triggers directive, which defines the automated ways in which the pipeline should be re-triggered. Our pipeline will communicate with the Eureka server and with the metrics endpoints exposed by every microservice using Spring Boot Actuator.
The test service name is EXAMPLE-SERVICE, which is equal to the value (in upper case) of the spring.application.name property defined inside the application.yml file. The monitored metric is the number of busy HTTP listener threads on the Tomcat container. These threads are responsible for processing incoming HTTP requests.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
    }
    stages { ... }
}

Integrating Jenkins pipeline with Eureka

The first stage of our pipeline is responsible for fetching the list of services registered in the service discovery server. Eureka exposes an HTTP API with several endpoints. One of them is GET /eureka/apps/{serviceName}, which returns a list of all instances of the application with the given name. We save the number of running instances and the base URL of every single instance. These values will be accessed during the next stages of the pipeline.
Here’s the fragment of the pipeline responsible for fetching the list of running instances of the application. The name of the stage is Calculate. We use the HTTP Request Plugin for HTTP connections.

stage('Calculate') {
	steps {
		script {
			def response = httpRequest "http://192.168.99.100:8761/eureka/apps/${env.SERVICE_NAME}"
			def app = printXml(response.content)
			def index = 0
			env["INSTANCE_COUNT"] = app.instance.size()
			app.instance.each {
				if (it.status == 'UP') {
					def address = "http://${it.ipAddr}:${it.port}"
					env["INSTANCE_${index++}"] = address 
				}
			}
		}
	}
}

@NonCPS
def printXml(String text) {
    return new XmlSlurper(false, false).parseText(text)
}

Here’s a sample response from the Eureka API for our microservice. The response content type is XML.

spring-autoscaler-2
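
The screenshot aside, an abridged response has roughly the following shape (values are illustrative); the Calculate stage reads the instance elements and their status, ipAddr and port fields:

<application>
	<name>EXAMPLE-SERVICE</name>
	<instance>
		<instanceId>some-host:example-service:123456</instanceId>
		<ipAddr>192.168.99.1</ipAddr>
		<status>UP</status>
		<port enabled="true">53672</port>
	</instance>
</application>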

Integrating Jenkins pipeline with Spring Boot Actuator metrics

Spring Boot Actuator exposes a metrics endpoint which allows you to find a metric by name and optionally by tag. In the fragment of the pipeline visible below, I’m trying to find an instance with the metric below or above a defined threshold. If there is such an instance, we stop the loop in order to proceed to the next stage, which performs scaling down or up. The addresses of the running applications are taken from the pipeline environment variables prefixed with INSTANCE_, which were saved in the previous stage.

stage('Metrics') {
	steps {
		script {
			def count = env.INSTANCE_COUNT.toInteger() // env values are strings, so convert before comparing
			for (def i = 0; i < count; i++) {
				def address = env["INSTANCE_${i}"]
				if (address == null)
					break
				def response = httpRequest address + env.METRICS_ENDPOINT
				def objRes = printJson(response.content)
				env.SCALE_TYPE = returnScaleType(objRes)
				if (env.SCALE_TYPE != "NONE")
					break
			}
		}
	}
}

@NonCPS
def printJson(String text) {
    return new JsonSlurper().parseText(text)
}

def returnScaleType(objRes) {
    def value = objRes.measurements[0].value
    if (value.toInteger() > 100)
        return "UP"
    else if (value.toInteger() < 20)
        return "DOWN"
    else
        return "NONE"
}
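
For reference, the Actuator metric endpoint returns a JSON document of roughly the following shape (values are illustrative); returnScaleType reads measurements[0].value and compares it with the thresholds:

{
	"name": "tomcat.threads.busy",
	"description": "...",
	"measurements": [
		{ "statistic": "VALUE", "value": 12.0 }
	],
	"availableTags": [
		{ "tag": "name", "values": [ "http-nio-auto-1" ] }
	]
}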

Shutdown application instance

In the last stage of our pipeline we will shut down a running instance or start a new instance, depending on the result saved in the previous stage. Shutdown may be easily performed by calling the Spring Boot Actuator endpoint. In the following fragment of the pipeline we pick the instance returned first by Eureka. Then we send a POST request to that address.
If we need to scale up our application, we call another pipeline responsible for building the fat JAR and launching it on our machine.

stage('Scaling') {
	steps {
		script {
			if (env.SCALE_TYPE == 'DOWN') {
				def ip = env["INSTANCE_0"] + env.SHUTDOWN_ENDPOINT
				httpRequest url:ip, contentType:'APPLICATION_JSON', httpMode:'POST'
			} else if (env.SCALE_TYPE == 'UP') {
				build job:'spring-boot-run-pipeline'
			}
			currentBuild.description = env.SCALE_TYPE
		}
	}
}

Here’s the full definition of our spring-boot-run-pipeline pipeline responsible for starting a new instance of the application. It clones the repository with the application source code, builds the binaries using Maven commands, and finally runs the application using the java -jar command, passing the address of the Eureka server as a parameter.

pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Run') {
            steps {
                dir('example-service') {
                    sh 'nohup java -jar -DEUREKA_URL=http://192.168.99.100:8761/eureka target/example-service-1.0-SNAPSHOT.jar 1>/dev/null 2>logs/runlog &'
                }
            }
        }
    }
}

Remote extension

The algorithm discussed in the previous sections will work fine only for microservices launched on a single machine. If we would like to extend it to work with many machines, we will have to modify our architecture as shown below. Each machine has a Jenkins agent running and communicating with the Jenkins master. If we would like to start a new instance of a microservice on a selected machine, we have to run the pipeline using the agent running on that machine. This agent is responsible only for building the application from source code and launching it on the target machine. The shutdown of an instance is still performed just by calling the HTTP endpoint.

spring-autoscaler-3

You can find more information about running Jenkins agents and connecting them with the Jenkins master via the JNLP protocol in my article Jenkins nodes on Docker containers. Assuming we have successfully launched some agents on the target machines, we need to parametrize our pipelines in order to be able to select the agent (and therefore the target machine) dynamically.
When scaling up our application, we have to pass the agent label to the downstream pipeline.

build job:'spring-boot-run-pipeline', parameters:[string(name: 'agent', value:"slave-1")]

The downstream pipeline will then be run by the agent labelled with the given parameter.

pipeline {
    agent {
        label "${params.agent}"
    }
    stages { ... }
}

If we have more than one agent connected to the master node, we can map their addresses to labels. Thanks to that, you are able to map the IP address of a microservice instance fetched from Eureka to the target machine with a Jenkins agent.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
        AGENT_192.168.99.102 = "slave-1"
        AGENT_192.168.99.103 = "slave-2"
    }
    stages { ... }
}

Summary

In this article I have demonstrated how to use Spring Boot Actuator metrics in order to scale your Spring Boot application up and down. Using the basic mechanisms provided by Spring Boot together with Spring Cloud Netflix Eureka and Jenkins, you can implement auto-scaling for your applications without any other third-party tools. The case described in this article assumes using Jenkins agents on the remote machines to launch new application instances there, but you may as well use a tool like Ansible for that. If you decide to run Ansible playbooks from Jenkins, you will not have to launch Jenkins agents on the remote machines. The source code with the sample applications is available on GitHub: https://github.com/piomin/sample-spring-boot-autoscaler.git.