Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. Spring Boot is currently the most popular and opinionated framework in the JVM world. On the other side of the barricade is Micronaut, a framework quickly gaining popularity, designed especially for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size in MB of generated fat JAR file
  • the application startup time
  • the performance of the application, measured as the average response time from a REST endpoint during sample load testing

To make our test relevant we will gather the statistics for two almost identical applications; the only difference between them will be the framework used to build them. Our sample application is very simple. It exposes some endpoints with in-memory CRUD operations for a single entity. It also exposes info and health endpoints, as well as a Swagger API with auto-generated documentation for all endpoints.

The sample application performance will be tested on JDK 11. We will use YourKit for profiling and monitoring memory usage after startup and during load testing, and Gatling for building performance API tests. First, let's take a short look at our sample application.

Source Code

I have implemented a very simple in-memory repository bean that adds new objects to a list and provides a find method for searching objects by the id generated inside the add method.

import java.util.ArrayList;
import java.util.List;

public class PersonRepository {

    List<Person> persons = new ArrayList<>();

    public Person add(Person person) {
        person.setId(persons.size()+1);
        persons.add(person);
        return person;
    }

    public Person findById(Long id) {
        return persons.stream()
                .filter(p -> p.getId().equals(id))
                .findFirst()
                .orElse(null);
    }

    public List<Person> findAll() {
        return persons;
    }

}
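
For reference, here's a minimal sketch of the Person entity assumed by the repository. Its fields (name, gender, age) are inferred from the JSON payload used later in the Gatling tests, and the Gender enum is an assumption:

public class Person {

    private Long id;
    private String name;
    private Gender gender; // assumed enum with values such as MALE, FEMALE
    private int age;

    // getters, setters and toString omitted for brevity

}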

The repository bean is injected into the controller. The controller exposes POST for adding a new object, GET /{id} for finding it by id, and GET for listing all objects. Here's the controller implementation inside the Spring Boot application:

@RestController
@RequestMapping("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Autowired
    PersonRepository repository;

    @PostMapping
    public Person add(@RequestBody Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable("id") Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @GetMapping
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}

Here's the similar implementation for Micronaut. The listing below is a minimal sketch of the equivalent controller, mirroring the Spring version above – Micronaut uses its own @Controller, @Post, @Get and @Body annotations together with JSR-330's @Inject:
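
@Controller("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Inject
    PersonRepository repository;

    @Post
    public Person add(@Body Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @Get("/{id}")
    public Person findById(Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}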

To implement the REST endpoints, healthchecks and the Swagger API we need to include some dependencies. Here's the list of dependencies for Spring Boot:

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.4.RELEASE</version>
</parent>
<groupId>pl.piomin.services</groupId>
<artifactId>sample-app</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
	<java.version>11</java.version>
	<maven.compiler.source>${java.version}</maven.compiler.source>
	<maven.compiler.target>${java.version}</maven.compiler.target>
</properties>
<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
</dependencies>
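
Note that SpringFox is not auto-configured by Spring Boot – it also requires a small configuration class with a Docket bean. Here's a minimal sketch (the class name and scanned base package are illustrative):

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        // expose all endpoints under the given base package in the Swagger docs
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("pl.piomin.services"))
                .paths(PathSelectors.any())
                .build();
    }

}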

Here’s the similar list of dependencies required for Micronaut:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>
<dependency>
	<groupId>ch.qos.logback</groupId>
	<artifactId>logback-classic</artifactId>
	<version>1.2.3</version>
	<scope>runtime</scope>
</dependency>

I also had to provide some additional configuration in application.yml to enable Swagger and healthchecks:

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false
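
For completeness, the Swagger definition served from classpath:META-INF/swagger is generated at compile time by the micronaut-openapi annotation processor (assuming it is configured in the build), driven by an @OpenAPIDefinition annotation on the application class. Here's a minimal sketch with illustrative title and version:

@OpenAPIDefinition(
        info = @Info(title = "Persons Management", version = "1.0")
)
public class PersonApplication {

    public static void main(String[] args) {
        Micronaut.run(PersonApplication.class);
    }

}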

Starting Application

First, let’s start our application. I use Intellij for that. The sample application built basing on Spring Boot starts around 6-7 seconds. The following start take up exactly 6.344.

performance-1

The similar application built on top of Micronaut starts in around 3-4 seconds. The run shown below took exactly 3.463 seconds. However, I had to disable environment deduction when starting the application behind a corporate proxy, by setting the VM option -Dmicronaut.cloud.platform=BARE_METAL. I think that the startup time of both applications is perfectly fine.

performance-7

Here's a graph that illustrates the difference in startup time between Spring Boot and Micronaut.

performance-sum-1

Building Application

We will also check the size of the application fat JAR. To do that you should build the application using the mvn clean install command. For Spring Boot we used two standard starters, Web and Actuator, plus the Swagger SpringFox library. As a result, more than 50 libraries are included. Of course, we could make some exclusions or avoid using starters, but I have chosen the simplest way to build the application. The fat JAR has a size of 24.2 MB.
The similar application based on Micronaut is much smaller. Its fat JAR has a size of 12.1 MB. I declared more libraries in pom.xml, but in the end only 37 libraries were included.
Spring Boot includes more libraries in its standard configuration, but on the other hand it offers more features and auto-configuration than Micronaut.

Here's a graph that illustrates the difference in the size of the target JAR between Spring Boot and Micronaut.

performance-sum-2

Memory Management

Just after startup the Spring Boot application has allocated 305 MB for heap and 81 MB for non-heap. I haven't set any memory limit using -Xmx or any other option. On the heap, 8 MB is consumed by old gen, 60 MB by eden space, and 15 MB by survivor space. Most of the non-heap memory is consumed by metaspace – 52 MB.
After running the performance load test heap allocation increased to 369 MB, and non-heap to 87 MB. Here's a screenshot that illustrates CPU and RAM usage before and during the performance test.

performance-2

Just after startup the Micronaut application has allocated 254 MB for heap and 51 MB for non-heap. I haven't set any memory limit using -Xmx or any other option – the same as for the Spring Boot application. On the heap, 2.5 MB is consumed by old gen, 20 MB by eden space, and 7 MB by survivor space. Most of the non-heap memory is consumed by metaspace – 35 MB.
After running the performance load test heap allocation did not change, while non-heap increased to 63 MB. Here's a screenshot that illustrates CPU and RAM usage before and during the performance test.

performance-8

Here's the heap memory usage comparison between Spring Boot and Micronaut just after startup.

performance-sum-3

And non-heap.

performance-sum-4

Performance Tests

I used Gatling for building the performance load tests. This tool lets you create test scenarios in Scala. We are generating 40k sample requests sent simultaneously by 20 threads (20 users, each repeating the request 2000 times). Here's the test implemented for the POST method.

import java.util.concurrent.TimeUnit

import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration.FiniteDuration

class SimpleTest extends Simulation {

  val scn = scenario("AddPerson").repeat(2000, "n") {
    exec(http("Persons-POST")
      .post("http://localhost:8080/persons")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"name":"Test${n}","gender":"MALE","age":100}"""))
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}

Here’s the test implemented for GET method.

import java.util.concurrent.TimeUnit

import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration.FiniteDuration

class SimpleTest2 extends Simulation {

  val scn = scenario("GetPerson").repeat(2000, "n") {
    exec(http("Persons-GET")
      .get("http://localhost:8080/persons/${n}")
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}
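
Assuming the Gatling Maven plugin is configured in pom.xml, each simulation can be launched from the command line like this (use the fully qualified class name if the simulation is declared inside a package):

$ mvn gatling:test -Dgatling.simulationClass=SimpleTest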

The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1176.

performance-3

The following screen shows the histogram with response time percentiles over time.

performance-5

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1428.

performance-4

The following screen shows the histogram with response time percentiles over time.

performance-6

Now, let's run the same Gatling load test against the Micronaut application. The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1290.

performance-9

The following screen shows the histogram with response time percentiles over time.

performance-11

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1538.

performance-10

The following screen shows the histogram with response time percentiles over time.

performance-12

There is no big difference in processing time between Spring Boot and Micronaut. It's possible that the small differences are not related to the framework itself, but rather to the underlying server. By default, Spring Boot uses Tomcat, while Micronaut uses Netty.


The Future of Spring Cloud Microservices After Netflix Era

If somebody asked you about Spring Cloud, the first thing that comes to your mind would probably be Netflix OSS support. Support for tools like Eureka, Zuul or Ribbon is provided not only by Spring, but also by some other popular frameworks used for building microservices architectures, like Apache Camel, Vert.x or Micronaut. Currently, Spring Cloud Netflix is the most popular Spring Cloud project, with around 3.2k stars on GitHub, while the second best has around 1.4k. Therefore, it is quite surprising that Pivotal has announced that most of the Spring Cloud Netflix modules are entering maintenance mode. You can read more about it in the post published on the Spring blog by Spencer Gibb: https://spring.io/blog/2018/12/12/spring-cloud-greenwich-rc1-available-now.
Let's make a short summary of these changes. Starting from the Spring Cloud Greenwich release train, Netflix OSS Archaius, Hystrix, Ribbon and Zuul are entering maintenance mode. This means that no new features will be added to these modules, and the Spring Cloud team will only perform bug fixes and patch security issues. Maintenance mode does not include the Eureka module, which is still supported.
The explanation for these changes is pretty simple, especially for two of them. Ribbon and Hystrix are no longer actively developed by Netflix, although they are still deployed at scale. Additionally, Hystrix has already been superseded by the newer telemetry solution called Atlas. The situation with Zuul is not so obvious. Netflix announced the open sourcing of Zuul 2 in May 2018. The new version of the Zuul gateway is built on top of the Netty server and includes some improvements and new features. You can read more about them on the Netflix blog: https://medium.com/netflix-techblog/open-sourcing-zuul-2-82ea476cb2b3. Despite that decision by the Netflix cloud team, the Spring Cloud team has abandoned development of the Zuul module. I can only guess that this was caused by the earlier decision to start a new Spring Cloud module dedicated to acting as an API gateway in microservices-based architectures – Spring Cloud Gateway.
The last piece of the puzzle is Eureka – a discovery server. It is still being developed, but the situation is interesting here as well. I will describe it in the next part of this article.
All this news has inspired me to take a look at the current situation of Spring Cloud and discuss some potential changes in the future. As the author of the Mastering Spring Cloud book I try to follow the evolution of the project to stay current. It's also worth mentioning that we have microservices inside my organization – of course built on top of Spring Boot and Spring Cloud, using modules like Eureka, Zuul and Ribbon. In this article, I would like to discuss potential changes around such popular microservices patterns as service discovery, distributed configuration, client-side load balancing and API gateway.

Service Discovery

Eureka is the only important Spring Cloud Netflix module that has not been moved to maintenance mode. However, I would not say that it is actively developed. The last commit in the repository maintained by Netflix is from January 11th. Some time ago they started working on Eureka 2, but it seems this work has been abandoned, or open sourcing the new code has been postponed indefinitely. Here https://github.com/Netflix/eureka/tree/2.x you can find an interesting comment about it: "The 2.x branch is currently frozen as we have had some internal changes w.r.t. to eureka2, and do not have any time lines for open sourcing of the new changes.". So, we have two possibilities. Maybe Netflix will decide to open source those internal changes as version 2 of the Eureka server. It is worth remembering that Eureka is a battle-proven solution used at scale by Netflix directly, and probably by many other organizations through Spring Cloud.
The second option is to choose another discovery server. Currently, Spring Cloud supports discovery based on various tools: ZooKeeper, Consul, Alibaba Nacos and Kubernetes (which is itself based on etcd). Support for etcd directly is also being developed by Spring Cloud, but it is still in the incubation stage, and it is not known if it will ever be promoted to the official release train. In my opinion, there is one leader amongst these solutions – HashiCorp's Consul.
Consul is now described as a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality. It can be used as a discovery server or a key/value store in your microservices-based architecture. The integration with Consul is implemented by the Spring Cloud Consul project. To enable the Consul client for your application you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

By default, Spring tries to connect to Consul at localhost:8500. If you need to override this address you should set the appropriate properties inside application.yml:

spring:  
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

You can easily test this solution with a local instance of Consul started as a Docker container:

$ docker run -d --name consul -p 8500:8500 consul
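
Once the container is up, you can quickly verify that the agent is reachable using Consul's HTTP API – after your application starts it should appear on this list:

$ curl http://localhost:8500/v1/catalog/services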

As you can see, Consul discovery setup with Spring Cloud is very easy – the same as for Eureka. Consul has one undoubted advantage over Eureka – it is continuously maintained and developed by HashiCorp, and its popularity is growing fast. It is a part of the bigger HashiCorp ecosystem, which includes Vault, Nomad and Terraform. In contrast to Eureka, Consul can be used not only for service discovery, but also as a configuration server in your microservices-based architecture.

Distributed Configuration

Netflix Archaius is an interesting solution for managing externalized configuration in a microservices architecture. Although it offers some interesting features like dynamic and typed properties or support for dynamic data sources such as URLs, JDBC or AWS DynamoDB, Spring Cloud has also decided to move it to maintenance mode. The popularity of Spring Cloud Archaius was limited, due to the existence of a similar project created by the Pivotal team and the community – Spring Cloud Config. Spring Cloud Config supports multiple source repositories including Git, JDBC, Vault and simple files. You can find many examples of using this project to provide distributed configuration for your microservices in my previous posts. Today, I'm not going to talk about it. We will discuss an alternative solution – also supported by Spring Cloud.
As I mentioned at the end of the previous section, Consul can also be used as a configuration server. If you use Eureka as a discovery server, using Spring Cloud Config as a configuration server is the natural choice, because Eureka simply does not provide such a feature. This is not the case if you decide to use Consul, where it makes sense to choose between two solutions: Spring Cloud Consul Config and Spring Cloud Config. Of course, both of them have their advantages and disadvantages. For example, you can easily build a cluster of Consul nodes, while with Spring Cloud Config you must rely on external discovery.
Now, let's see how to use Spring Cloud Consul for managing external configuration in your application. To enable it on the application side you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-config</artifactId>
</dependency>

The same as for service discovery, if you would like to override some default client settings you need to set the spring.cloud.consul.* properties. However, such configuration must be provided inside bootstrap.yml.

spring:  
  application:
    name: callme-service
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

The property source created in Consul should be placed inside the config folder, under a key with the same name as the application name provided in bootstrap.yml. You should create the key server.port with value 0 to force Spring Boot to generate a random listening port number. If you need to set a default listening port for the application, you should provide the following configuration.

spring-cloud-1

When enabling dynamic port number generation you also need to override the application instance id to make it unique across a single machine. This is required if you are running multiple instances of a single service on the same machine. We will do it for callme-service, so we need to override the property spring.cloud.consul.discovery.instance-id with our own value as shown below.

spring-cloud-4
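
Putting the two screenshots together, the entries stored in the Consul key/value store for callme-service would look more or less like this (the instance-id expression is just one possible choice):

config/callme-service/server.port = 0
config/callme-service/spring.cloud.consul.discovery.instance-id = ${spring.application.name}:${random.value}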

Then, you should see the following log on your application startup.

spring-cloud-3

API Gateway

The successor of Spring Cloud Netflix Zuul is Spring Cloud Gateway. This project was started around two years ago, and is now the second most popular Spring Cloud project with 1.4k stars on GitHub. It provides an API gateway built on top of the Spring ecosystem, including Spring 5, Spring Boot 2 and Project Reactor. It runs on Netty, and does not work with traditional servlet containers like Tomcat or Jetty. It lets you define routes, predicates and filters.
The API gateway, the same as every other Spring Cloud microservice, may be easily integrated with service discovery based on Consul. We just need to include the appropriate dependencies inside pom.xml. We will use the latest development version of the Spring Cloud libraries – 2.2.0.BUILD-SNAPSHOT. Here's the list of required dependencies:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-config</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-gateway</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

The gateway configuration will also be served by Consul. Because we have considerably more configuration settings than for the sample microservices, we will store them as a YAML file. To achieve that we should create a YAML file available under the path /config/gateway-service/data in the Consul key/value store. The configuration visible below enables service discovery integration and defines routes to the downstream services. Each route contains the name of the target service under which it is registered in service discovery, a matching path, and a rewrite path used for calling the endpoint exposed by the downstream service. The following configuration is loaded on startup by our API gateway:

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: caller-service
          uri: lb://caller-service
          predicates:
            - Path=/caller/**
          filters:
            - RewritePath=/caller/(?<path>.*), /$\{path}
        - id: callme-service
          uri: lb://callme-service
          predicates:
            - Path=/callme/**
          filters:
            - RewritePath=/callme/(?<path>.*), /$\{path}

Here’s the same configuration visible on Consul.

spring-cloud-2

The last step is to force gateway-service to read the configuration stored as YAML. To do that we need to set the property spring.cloud.consul.config.format to YAML. Here's the full configuration provided inside bootstrap.yml.

spring:
  application:
    name: gateway-service
  cloud:
    consul:
      host: 192.168.99.100
      config:
        format: YAML

Client-side Load Balancer

In version 2.2.0.BUILD-SNAPSHOT of Spring Cloud Commons, Ribbon is still the main auto-configured load balancer for HTTP clients. Although the Spring Cloud team has announced that Spring Cloud Load Balancer will be the successor of Ribbon, we currently won't find much information about that project in the documentation or on the web. We may expect that, the same as with Netflix Ribbon, any configuration will be transparent for us, especially if we use a discovery client. Currently, the spring-cloud-loadbalancer module is a part of the Spring Cloud Commons project. You may include it directly in your application by declaring the following dependency in pom.xml:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-loadbalancer</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

For test purposes it is worth excluding some Netflix modules pulled in by the spring-cloud-starter-consul-discovery starter. This way we can be sure that Ribbon is not being used in the background as a load balancer. Here's the list of exclusions I set for my sample application:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
	<exclusions>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-netflix-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-netflix-archaius</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-httpclient</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-loadbalancer</artifactId>
		</exclusion>
	</exclusions>
</dependency>

Treat my example just as a playground – the target approach is certainly going to be much easier. First, we should annotate our main or configuration class with @LoadBalancerClient. As always, the name of the client should be the same as the name of the target service registered in the registry. The annotation should also point to the class with the client configuration.

@SpringBootApplication
@LoadBalancerClients({
	@LoadBalancerClient(name = "callme-service", configuration = ClientConfiguration.class)
})
public class CallerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallerApplication.class, args);
	}

	@Bean
	RestTemplate template() {
		return new RestTemplate();
	}

}

Here's our load balancer configuration class. It contains the declaration of a single @Bean. I have chosen the RoundRobinLoadBalancer type.

public class ClientConfiguration {

	@Bean
	public RoundRobinLoadBalancer roundRobinContextLoadBalancer(LoadBalancerClientFactory clientFactory, Environment env) {
		String serviceId = clientFactory.getName(env);
		return new RoundRobinLoadBalancer(serviceId, clientFactory
				.getLazyProvider(serviceId, ServiceInstanceSupplier.class), -1);
	}

}

Finally, here's the implementation of the caller-service controller. It uses LoadBalancerClientFactory directly to find the list of available instances of callme-service. Then it selects a single instance, gets its host and port, and sets them as the target URL.

@RestController
@RequestMapping("/caller")
public class CallerController {

	@Autowired
	Environment environment;
	@Autowired
	RestTemplate template;
	@Autowired
	LoadBalancerClientFactory clientFactory;

	@GetMapping
	public String call() {
		RoundRobinLoadBalancer lb = clientFactory.getInstance("callme-service", RoundRobinLoadBalancer.class);
		ServiceInstance instance = lb.choose().block().getServer();
		String url = "http://" + instance.getHost() + ":" + instance.getPort() + "/callme";
		String callmeResponse = template.getForObject(url, String.class);
		return "I'm Caller running on port " + environment.getProperty("local.server.port")
				+ " calling-> " + callmeResponse;
	}

}

Summary

The following picture illustrates the architecture of the sample system. We have two instances of callme-service and a single instance of caller-service, which uses Spring Cloud Load Balancer to find the list of available instances of callme-service. The ports are generated dynamically. The API gateway hides the complexity of our system from external clients. It is available on port 8080, and forwards requests to the downstream services based on the request context path.

spring-cloud-1.png

After starting all the microservices, they should be registered on your Consul node.

spring-cloud-7

Now, you can call the endpoint exposed by caller-service through the gateway: http://localhost:8080/caller. You should see something like this:

spring-cloud-6
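
Based on the CallerController code shown earlier, the plain-text response has the following shape (the port number is random, and the trailing part is the response returned by callme-service):

$ curl http://localhost:8080/caller
I'm Caller running on port 53912 calling-> ...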

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-spring-cloud-microservices-future.git.

Redis in Microservices Architecture

Redis can be widely used in a microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements it can act as a primary database, a cache or a message broker. Since it is also a key/value store, we can use it as a configuration server or discovery server in a microservices architecture. Although it is usually defined as an in-memory data store, we can also run it in persistent mode.
Today, I'm going to show you some examples of using Redis with microservices built on top of the Spring Boot and Spring Cloud frameworks. These applications will communicate with each other asynchronously using Redis Pub/Sub, use Redis as a cache or primary database, and finally use Redis as a configuration server. Here's the picture that illustrates the described architecture.

redis-micro-2.png

Redis as Configuration Server

If you have already built microservices with Spring Cloud, you have probably come across Spring Cloud Config. It is responsible for providing the distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a backend repository for property sources. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in an official Spring Cloud release, but for now you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config. How do we use it? It's very simple. Let's see.
The current SNAPSHOT version of Spring Boot is 2.2.0.BUILD-SNAPSHOT, the same as for Spring Cloud Config. When building the Spring Cloud Config Server we only need to include the dependencies shown below.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</parent>
<artifactId>config-service</artifactId>
<groupId>pl.piomin.services</groupId>
<version>1.0-SNAPSHOT</version>

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-config-server</artifactId>
		<version>2.2.0.BUILD-SNAPSHOT</version>
	</dependency>
</dependencies>

By default, Spring Cloud Config Server uses a Git repository backend. We need to activate the redis profile to force it to use Redis as the backend. If your Redis instance listens on an address other than localhost:6379, you need to overwrite the auto-configured connection settings with the spring.redis.* properties. Here's our bootstrap.yml file.

spring:
  application:
    name: config-service
  profiles:
    active: redis
  redis:
    host: 192.168.99.100

The application main class should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

	public static void main(String[] args) {
		new SpringApplicationBuilder(ConfigApplication.class).run(args);
	}

}

Before running the application we need to start a Redis instance. Here's the command that runs it as a Docker container and exposes it on port 6379.

$ docker run -d --name redis -p 6379:6379 redis
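
You can quickly check that the instance is up by pinging it through the container:

$ docker exec -it redis redis-cli ping
PONG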

The configuration for every application has to be available under the key ${spring.application.name} or ${spring.application.name}-${spring.profiles.active[n]}.
We have to create a hash with keys corresponding to the names of the configuration properties. Our sample application driver-management uses three configuration properties: server.port for setting the HTTP listening port, spring.redis.host for changing the default Redis address used as message broker and database, and sample.topic.name for setting the name of the topic used for asynchronous communication between our microservices. Here's the structure of the Redis hash created for driver-management, visualized with RDBTools.

redis-micro-3

That visualization is the equivalent of running the Redis CLI command HGETALL, which returns all the fields and values in a hash.

>> HGETALL driver-management
{
  "server.port": "8100",
  "sample.topic.name": "trips",
  "spring.redis.host": "192.168.99.100"
}
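
To create that hash from scratch you can use the Redis CLI HSET command, one field at a time:

>> HSET driver-management server.port 8100
>> HSET driver-management sample.topic.name trips
>> HSET driver-management spring.redis.host 192.168.99.100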

After setting the keys and values in Redis and running Spring Cloud Config Server with the active redis profile, we need to enable the distributed configuration feature on the client side. To do that we just need to include the spring-cloud-starter-config dependency in the pom.xml of every microservice.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-config</artifactId>
</dependency>

We use the newest stable version of Spring Cloud.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Greenwich.SR1</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The application name is taken from the spring.application.name property on startup, so we need to provide the following bootstrap.yml file.

spring:
  application:
    name: driver-management
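
You can verify that the config server resolves those properties from Redis by calling its standard REST endpoint in the form /{application}/{profile}:

$ curl http://localhost:8888/driver-management/default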

Redis as Message Broker

Now we can proceed to the second use case of Redis in a microservices-based architecture – a message broker. We will implement a typical asynchronous system, which is shown in the picture below. The microservice trip-management sends a notification to Redis Pub/Sub after creating a new trip and after finishing the current one. The notification is received by both driver-management and passenger-management, which are subscribed to the particular channel.

micro-redis-1.png

Our application is very simple. We just need to add the following dependencies in order to provide a REST API and integrate with Redis Pub/Sub.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

We should register beans for the channel topic and the publisher. TripPublisher is responsible for sending messages to the target topic.

@Configuration
public class TripConfiguration {

	@Autowired
	RedisTemplate<?, ?> redisTemplate;

	@Bean
	TripPublisher redisPublisher() {
		return new TripPublisher(redisTemplate, topic());
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

TripPublisher uses RedisTemplate for sending messages to the topic. Before sending, it converts every message from an object to a JSON string using Jackson2JsonRedisSerializer.

public class TripPublisher {

	private static final Logger LOGGER = LoggerFactory.getLogger(TripPublisher.class);

	RedisTemplate<?, ?> redisTemplate;
	ChannelTopic topic;

	public TripPublisher(RedisTemplate<?, ?> redisTemplate, ChannelTopic topic) {
		this.redisTemplate = redisTemplate;
		this.redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer(Trip.class));
		this.topic = topic;
	}

	public void publish(Trip trip) throws JsonProcessingException {
		LOGGER.info("Sending: {}", trip);
		redisTemplate.convertAndSend(topic.getTopic(), trip);
	}

}

We have already implemented the logic on the publisher side. Now, we can proceed to the implementation on the subscriber side. We have two microservices, driver-management and passenger-management, that listen for the notifications sent by the trip-management microservice. We need to define a RedisMessageListenerContainer bean and set the message listener implementation class.

@Configuration
public class DriverConfiguration {

	@Autowired
	RedisConnectionFactory redisConnectionFactory;

	@Bean
	RedisMessageListenerContainer container() {
		RedisMessageListenerContainer container = new RedisMessageListenerContainer();
		container.addMessageListener(messageListener(), topic());
		container.setConnectionFactory(redisConnectionFactory);
		return container;
	}

	@Bean
	MessageListenerAdapter messageListener() {
		return new MessageListenerAdapter(new DriverSubscriber());
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

The class responsible for handling incoming notifications needs to implement the MessageListener interface. After receiving a message, DriverSubscriber deserializes it from JSON to an object and changes the driver's status.

@Service
public class DriverSubscriber implements MessageListener {

	private final Logger LOGGER = LoggerFactory.getLogger(DriverSubscriber.class);

	@Autowired
	DriverRepository repository;
	ObjectMapper mapper = new ObjectMapper();

	@Override
	public void onMessage(Message message, byte[] bytes) {
		try {
			Trip trip = mapper.readValue(message.getBody(), Trip.class);
			LOGGER.info("Message received: {}", trip.toString());
			Optional<Driver> optDriver = repository.findById(trip.getDriverId());
			if (optDriver.isPresent()) {
				Driver driver = optDriver.get();
				if (trip.getStatus() == TripStatus.DONE)
					driver.setStatus(DriverStatus.WAITING);
				else
					driver.setStatus(DriverStatus.BUSY);
				repository.save(driver);
			}
		} catch (IOException e) {
			LOGGER.error("Error reading message", e);
		}
	}

}

Redis as Primary Database

Although the main purpose of using Redis is in-memory caching or a key/value store, it may also act as a primary database for your application. In that case it is worth running Redis in persistent mode.

$ docker run -d --name redis -p 6379:6379 redis redis-server --appendonly yes

Entities are stored inside Redis as hashes, using hash operations. Each entity needs to have a hash key and an id.

@RedisHash("driver")
public class Driver {

	@Id
	private Long id;
	private String name;
	@GeoIndexed
	private Point location;
	private DriverStatus status;

	// setters and getters ...
}

Fortunately, Spring Data Redis provides the well-known repositories pattern for Redis integration. To enable it we should annotate the configuration or main class with @EnableRedisRepositories. When using the Spring repositories pattern we don't have to build any queries to Redis ourselves.

@Configuration
@EnableRedisRepositories
public class DriverConfiguration {
	// logic ...
}

Instead, we just name methods following the Spring Data convention. For more details, you may refer to my previous article Introduction to Spring Data Redis. For our sample purposes we can use the default methods implemented by Spring Data. Here's the declaration of the repository interface in driver-management.

public interface DriverRepository extends CrudRepository<Driver, Long> {}


Conclusion

As I mentioned in the preface, there are various use cases for Redis in a microservices architecture. I have presented how you can easily use it together with Spring Cloud and Spring Data to provide a configuration server, message broker and database. Redis is commonly considered just a cache, but I hope that after reading this article you will change your mind about it. The sample applications' source code is, as usual, available on GitHub: https://github.com/piomin/sample-redis-microservices.git.

Kotlin Microservices with Micronaut, Spring Cloud and JPA

The Micronaut framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach is integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both these popular tools from the Spring Cloud project. That's good news, because in version 1.0 the only supported distributed configuration solution was Consul, and there was no possibility of using Eureka discovery together with the Consul property source (running them together ended with an exception).

In this article you will learn how to:

  • Configure Micronaut Maven support for Kotlin using Kapt compiler
  • Implement microservices with Micronaut and Kotlin
  • Integrate Micronaut with Spring Cloud Eureka discovery server
  • Integrate Micronaut with Spring Cloud Config server
  • Configure JPA/Hibernate support for application built on top Micronaut
  • Run a single instance of PostgreSQL shared between all sample microservices (for simplicity)

Our architecture is pretty similar to the one described in my previous article about Micronaut, Quick Guide to Microservices with Micronaut Framework. We also have three microservices that communicate with each other. We use Spring Cloud Eureka and Spring Cloud Config for discovery and distributed configuration instead of Consul. Every service has a backend store – a PostgreSQL database. The architecture is visualized in the following picture.

micronaut-2-arch (1).png

After that short introduction we may proceed to the development. Let's begin by configuring Kotlin support for Micronaut.

1. Kotlin with Micronaut – configuration

Support for Kotlin with the Kapt compiler plugin is described well on the Micronaut docs site (https://docs.micronaut.io/1.1.0.M1/guide/index.html#kotlin). However, I decided to use Maven instead of Gradle, so our configuration will be slightly different from the instructions for Gradle. We configure Kapt inside the Maven plugin for Kotlin, kotlin-maven-plugin. Thanks to that, Kapt will create Java "stub" classes for each of your Kotlin classes, which can then be processed by Micronaut's Java annotation processor. The Micronaut annotation processors are declared inside the annotationProcessorPaths tag in the configuration section. Here's the full Maven configuration providing support for Kotlin. Besides the core library micronaut-inject-java, we also use annotation processors from the tracing, OpenAPI and JPA libraries.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<dependencies>
		<dependency>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-allopen</artifactId>
			<version>${kotlin.version}</version>
		</dependency>
	</dependencies>
	<configuration>
		<jvmTarget>1.8</jvmTarget>
	</configuration>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
		<execution>
			<id>kapt</id>
			<goals>
				<goal>kapt</goal>
			</goals>
			<configuration>
				<sourceDirs>
					<sourceDir>src/main/kotlin</sourceDir>
				</sourceDirs>
				<annotationProcessorPaths>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-inject-java</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-openapi</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-tracing</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>javax.persistence</groupId>
						<artifactId>javax.persistence-api</artifactId>
						<version>2.2</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-hibernate-jpa</artifactId>
						<version>1.1.0.RC2</version>
					</annotationProcessorPath>
				</annotationProcessorPaths>
			</configuration>
		</execution>
	</executions>
</plugin>

We also should not run maven-compiler-plugin during the compilation phase. The Kapt compiler generates Java classes, so we don't need to run any other compiler during the build.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-compiler-plugin</artifactId>
	<configuration>
		<proc>none</proc>
		<source>1.8</source>
		<target>1.8</target>
	</configuration>
	<executions>
		<execution>
			<id>default-compile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>default-testCompile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>java-compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>java-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>testCompile</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Finally, we will add Kotlin core library and Jackson module for JSON serialization.

<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

If you are running the application in IntelliJ you should first enable annotation processing. To do that go to Build, Execution, Deployment -> Compiler -> Annotation Processors as shown below.

micronaut-2-1

2. Running Postgres

Before proceeding to the development we have to start an instance of the PostgreSQL database. It will be started as a Docker container. For me, PostgreSQL is now available at 192.168.99.100:5432, because I'm using Docker Toolbox.

$ docker run -d --name postgres -e POSTGRES_USER=micronaut -e POSTGRES_PASSWORD=123456 -e POSTGRES_DB=micronaut -p 5432:5432 postgres

3. Enabling Hibernate for Micronaut

Hibernate configuration is a little harder in Micronaut than in Spring Boot, as we don't have any project like Spring Data JPA, where almost everything is auto-configured. Besides the JDBC driver for the target database, we have to include the following dependencies. We may choose between three available libraries providing a datasource implementation: Tomcat, Hikari or DBCP.

<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-jdbc-hikari</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-jpa</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

The next step is to provide some configuration settings. All the properties will be stored on the configuration server. We have to set the database connection settings and credentials. The JPA configuration settings are provided under the jpa.* key. We force Hibernate to update the database schema on application startup and print all SQL logs (for tests only).

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Here’s our sample domain object.

@Entity
data class Department(@Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq") @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq") var id: Long,
                      var organizationId: Long, var name: String) {

    @Transient
    var employees: MutableList<Employee> = mutableListOf()

}

The repository bean needs to inject an EntityManager using the @PersistenceContext and @CurrentSession annotations. All functions need to be annotated with @Transactional, which requires the methods not to be final (the open modifier in Kotlin).

@Singleton
open class DepartmentRepository(@param:CurrentSession @field:PersistenceContext val entityManager: EntityManager) {

    @Transactional
    open fun add(department: Department): Department {
        entityManager.persist(department)
        return department
    }

    @Transactional(readOnly = true)
    open fun findById(id: Long): Department = entityManager.find(Department::class.java, id)

    @Transactional(readOnly = true)
    open fun findAll(): List<Department> = entityManager.createQuery("SELECT d FROM Department d").resultList as List<Department>

    @Transactional(readOnly = true)
    open fun findByOrganization(organizationId: Long) = entityManager.createQuery("SELECT d FROM Department d WHERE d.organizationId = :orgId")
            .setParameter("orgId", organizationId)
            .resultList as List<Department>

}

4. Running Spring Cloud Config Server

Running Spring Cloud Config server is very simple. I have already described it in some of my previous articles, but all those examples were prepared in Java, while today we start it as a Kotlin application. Here's our main class. It should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
class ConfigApplication

fun main(args: Array<String>) {
    runApplication<ConfigApplication>(*args)
}

Besides the Kotlin core dependency we need to include the spring-cloud-config-server artifact.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

By default, the config server tries to use Git as the properties source backend. We prefer using classpath resources, which is much simpler for our tests. To do that, we have to enable the native profile. We will also set the server port to 8888.

spring:
  application:
    name: config-service
  profiles:
    active: native
server:
  port: 8888

If you place all configuration files under the directory /src/main/resources/config they will be automatically loaded on startup.

micronaut-2-2

Here's the configuration file for department-service.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
endpoints:
  info:
    enabled: true
    sensitive: false
eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

5. Running Eureka Server

The Eureka server will also be run as a Spring Boot application written in Kotlin.

@SpringBootApplication
@EnableEurekaServer
class DiscoveryApplication

fun main(args: Array<String>) {
    runApplication<DiscoveryApplication>(*args)
}

We also need to include a single dependency, spring-cloud-starter-netflix-eureka-server, besides kotlin-stdlib-jdk8.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

We run a standalone instance of Eureka on port 8761.

spring:
  application:
    name: discovery-service
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

6. Integrating Micronaut with Spring Cloud

The implementation of the distributed configuration client is automatically included in the Micronaut core. We only need to include the module for service discovery.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

We don't have to place anything in the source code. All the features can be enabled via configuration settings. First, we need to enable the config client by setting the property micronaut.config-client.enabled to true. The next step is to enable the specific implementation of the config client – in this case Spring Cloud Config – and then set the target URL.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
spring:
  cloud:
    config:
      enabled: true
      uri: http://localhost:8888/

Each application fetches its properties from the configuration server. The part of the configuration responsible for enabling discovery based on the Eureka server is shown below.

eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

7. Running applications

Kapt needs to be able to compile the Kotlin code to Java successfully. That's why we place the main method inside a companion object within the class declaration and annotate it with @JvmStatic. The main class visible below is also annotated with @OpenAPIDefinition in order to generate the Swagger definition for the API methods.

@OpenAPIDefinition(
        info = Info(
                title = "Departments Management",
                version = "1.0",
                description = "Department API",
                contact = Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
        )
)
open class DepartmentApplication {

    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            Micronaut.run(DepartmentApplication::class.java)
        }
    }
	
}

Here’s the controller class from department-service. It injects repository bean for database integration and EmployeeClient for HTTP communication with employee-service.

@Controller("/departments")
open class DepartmentController(private val logger: Logger = LoggerFactory.getLogger(DepartmentController::class.java)) {

    @Inject
    lateinit var repository: DepartmentRepository
    @Inject
    lateinit var employeeClient: EmployeeClient

    @Post
    fun add(@Body department: Department): Department {
        logger.info("Department add: {}", department)
        return repository.add(department)
    }

    @Get("/{id}")
    fun findById(id: Long): Department? {
        logger.info("Department find: id={}", id)
        return repository.findById(id)
    }

    @Get
    fun findAll(): List<Department> {
        logger.info("Department find")
        return repository.findAll()
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    open fun findByOrganization(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        return repository.findByOrganization(organizationId)
    }

    @Get("/organization/{organizationId}/with-employees")
    @ContinueSpan
    open fun findByOrganizationWithEmployees(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        val departments = repository.findByOrganization(organizationId)
        departments.forEach { it.employees = employeeClient.findByDepartment(it.id) }
        return departments
    }

}

It is worth taking a look at the HTTP client implementation. It has been discussed in detail in my previous article about Micronaut, Quick Guide to Microservices with Micronaut Framework.

@Client(id = "employee-service", path = "/employees")
interface EmployeeClient {

	@Get("/department/{departmentId}")
	fun findByDepartment(departmentId: Long): MutableList<Employee>
	
}

You can run all the microservices from IntelliJ. You may also build the whole project with Maven using the mvn clean install command, and then run the services using the java -jar command. Thanks to maven-shade-plugin, the applications are packaged as uber jars. Then run them in the following order: config-service, discovery-service and the microservices.

$ java -jar config-service/target/config-service-1.0-SNAPSHOT.jar
$ java -jar discovery-service/target/discovery-service-1.0-SNAPSHOT.jar
$ java -jar employee-service/target/employee-service-1.0-SNAPSHOT.jar
$ java -jar department-service/target/department-service-1.0-SNAPSHOT.jar
$ java -jar organization-service/target/organization-service-1.0-SNAPSHOT.jar

After that you may take a look at the Eureka dashboard available at http://localhost:8761 to see the list of running services. You may also perform some tests by calling the HTTP API methods.

micronaut-2-3

Summary

The sample applications' source code is available on GitHub in the repository sample-micronaut-microservices, in the kotlin branch. You can refer to that repository for more implementation details that have not been included in this article.

Quick Guide to Microservices with Micronaut Framework

The Micronaut framework has been introduced as an alternative to Spring Boot for building microservice applications. At first glance it is very similar to Spring. It also implements patterns like dependency injection and inversion of control based on annotations, although it uses JSR-330 (javax.inject) for that. It has been designed especially for building serverless functions, Android applications, and low memory-footprint microservices. This means it should offer faster startup times, lower memory usage and easier unit testing than competing frameworks. However, today I don't want to focus on those characteristics of Micronaut. I'm going to show you how to build a simple microservices-based system using this framework. You can easily compare it with Spring Boot and Spring Cloud by reading my previous article on the same subject, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. Does Micronaut have a chance to gain the same popularity as Spring Boot? Let's find out.

Our sample system consists of three independent microservices that communicate with each other. All of them integrate with Consul in order to fetch shared configuration. After startup, every single service registers itself in Consul. The organization-service and department-service applications call endpoints exposed by other microservices using Micronaut's declarative HTTP client. The traces from this communication are sent to Zipkin. The source code of the sample applications is available on GitHub in the repository sample-micronaut-microservices.

[Figure: architecture of the sample microservices system]

Step 1. Creating application

We need to start by including some dependencies in our Maven pom.xml. First, let's define the BOM with the newest stable Micronaut version.

<properties>
	<exec.mainClass>pl.piomin.services.employee.EmployeeApplication</exec.mainClass>
	<micronaut.version>1.0.3</micronaut.version>
	<jdk.version>1.8</jdk.version>
</properties>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.micronaut</groupId>
			<artifactId>micronaut-bom</artifactId>
			<version>${micronaut.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The list of required dependencies isn't very long. Not all of them are strictly required, but they will be useful in our demo. For example, micronaut-management needs to be included in case we would like to expose some built-in management and monitoring endpoints.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>

To build the application uber JAR, we need to configure a plugin responsible for packaging the JAR file with dependencies. It can be, for example, maven-shade-plugin, as sketched below. When building a new application it is also worth exposing basic information about it under the /info endpoint. As I have already mentioned, Micronaut adds support for monitoring your app via HTTP endpoints after including the artifact micronaut-management. Management endpoints are integrated with the Micronaut security module, which means that you need to authenticate yourself to be able to access them. To simplify things, we can disable authentication for the /info endpoint.
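Here's a minimal sketch of such a packaging configuration, assuming maven-shade-plugin and reusing the exec.mainClass property defined earlier (adjust it to your own main class):

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-shade-plugin</artifactId>
	<version>3.2.1</version>
	<executions>
		<execution>
			<phase>package</phase>
			<goals>
				<goal>shade</goal>
			</goals>
			<configuration>
				<transformers>
					<!-- Sets Main-Class in MANIFEST.MF so the JAR is runnable with java -jar -->
					<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
						<mainClass>${exec.mainClass}</mainClass>
					</transformer>
				</transformers>
			</configuration>
		</execution>
	</executions>
</plugin>

And here's the configuration that disables authentication for the /info endpoint: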

endpoints:
  info:
    enabled: true
    sensitive: false

We can customize the /info endpoint by adding some of the supported info sources. This mechanism is very similar to the Spring Boot Actuator approach. If a git.properties file is available on the classpath, all the values inside that file will be exposed by the /info endpoint. The same applies to the build-info.properties file, which needs to be placed inside the META-INF directory. However, in comparison with Spring Boot, we need to provide more configuration in pom.xml to generate those files and package them into the application JAR. The following Maven plugins are responsible for generating the required properties files.

<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<version>2.2.6</version>
	<executions>
		<execution>
			<id>get-the-git-infos</id>
			<goals>
				<goal>revision</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<verbose>true</verbose>
		<dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
		<dateFormat>MM-dd-yyyy '@' HH:mm:ss Z</dateFormat>
		<generateGitPropertiesFile>true</generateGitPropertiesFile>
		<generateGitPropertiesFilename>src/main/resources/git.properties</generateGitPropertiesFilename>
		<failOnNoGitDirectory>true</failOnNoGitDirectory>
	</configuration>
</plugin>
<plugin>
	<groupId>com.rodiontsev.maven.plugins</groupId>
	<artifactId>build-info-maven-plugin</artifactId>
	<version>1.2</version>
	<configuration>
		<filename>classes/META-INF/build-info.properties</filename>
		<projectProperties>
			<projectProperty>project.groupId</projectProperty>
			<projectProperty>project.artifactId</projectProperty>
			<projectProperty>project.version</projectProperty>
		</projectProperties>
	</configuration>
	<executions>
		<execution>
			<phase>prepare-package</phase>
			<goals>
				<goal>extract</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Now, our /info endpoint is able to print the most important information about our app, including the Maven artifact name, version, and last Git commit id.

[Figure: /info endpoint response]
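Since the screenshot is not reproduced here, the response might look roughly like this (all field values are illustrative, not taken from a real run):

{
	"git": {
		"commit": {
			"id": "abc1234",
			"time": "01-14-2019 @ 10:00:00 +0100"
		},
		"branch": "master"
	},
	"project": {
		"groupId": "pl.piomin.services",
		"artifactId": "employee-service",
		"version": "1.0-SNAPSHOT"
	}
}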

Step 2. Exposing HTTP endpoints

Micronaut provides its own annotations for marking HTTP endpoints and methods. As I mentioned in the preface, it also uses JSR-330 (javax.inject) for dependency injection. Our controller class should be annotated with @Controller. There are also annotations for every HTTP method type. The path parameter is automatically mapped to the class method parameter by its name, which is a nice simplification in comparison to Spring MVC, where we need to use the @PathVariable annotation. The repository bean used for CRUD operations is injected into the controller using the @Inject annotation.

@Controller("/employees")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Inject
    EmployeeRepository repository;

    @Post
    public Employee add(@Body Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.add(employee);
    }

    @Get("/{id}")
    public Employee findById(Long id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Employee> findAll() {
        LOGGER.info("Employees find");
        return repository.findAll();
    }

    @Get("/department/{departmentId}")
    @ContinueSpan
    public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
        LOGGER.info("Employees find: departmentId={}", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    public List<Employee> findByOrganization(@SpanTag("organizationId") Long organizationId) {
        LOGGER.info("Employees find: organizationId={}", organizationId);
        return repository.findByOrganization(organizationId);
    }

}

Our repository bean is pretty simple. It just provides an in-memory store for Employee instances. We mark it with the @Singleton annotation.

@Singleton
public class EmployeeRepository {

	private List<Employee> employees = new ArrayList<>();
	
	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		Optional<Employee> employee = employees.stream().filter(a -> a.getId().equals(id)).findFirst();
		if (employee.isPresent())
			return employee.get();
		else
			return null;
	}
	
	public List<Employee> findAll() {
		return employees;
	}
	
	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}
	
	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}
	
}

Micronaut is able to automatically generate a Swagger YAML definition from our controllers and methods based on annotations. To achieve this, we first need to include the following dependency in our pom.xml.

<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>

Then we should annotate the application's main class with @OpenAPIDefinition and provide some basic information like the title or version number. Here's the employee application's main class.

@OpenAPIDefinition(
    info = @Info(
        title = "Employees Management",
        version = "1.0",
        description = "Employee API",
        contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
    )
)
public class EmployeeApplication {

    public static void main(String[] args) {
        Micronaut.run(EmployeeApplication.class);
    }

}

Micronaut generates the Swagger file based on the title and version fields inside the @Info annotation. In that case, our YAML definition file is available under the name employees-management-1.0.yml, and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file.

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**

Now, our file is available under the path http://localhost:8080/swagger/employees-management-1.0.yml if the application runs on the default 8080 port (we won't do that, for reasons I'm going to describe in the next part of this article). In comparison to Spring Boot, there is no project like SpringFox Swagger for Micronaut, so we need to copy the content into an online editor in order to see the graphical representation of our Swagger YAML. Here it is.

[Figure: Swagger API documentation rendered in an online editor]

OK, since we have finished the implementation of a single microservice, we may proceed to the cloud-native features provided by Micronaut.

Step 3. Distributed configuration

Micronaut comes with built-in APIs for distributed configuration. In fact, the only solution available for now is distributed configuration based on HashiCorp's Consul. Micronaut's features for externalizing and adapting configuration to the environment are very similar to the Spring Boot approach. We also have application.yml and bootstrap.yml files, which can be used for application environment configuration. When using distributed configuration, we first need to provide a bootstrap.yml file on the classpath. It should contain the address of the remote configuration server and the preferred configuration store format. Of course, we first need to enable the distributed configuration client by setting the property micronaut.config-client.enabled to true. Here's the bootstrap.yml file for department-service.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "192.168.99.100:8500"
    config:
      format: YAML

We can choose between properties, JSON, YAML and FILES (git2consul) configuration formats. I decided to use YAML. To apply this configuration to Consul, we first need to start it locally in development mode. Because I'm using Docker Toolbox, the default address of Consul is 192.168.99.100. The following Docker command starts a single-node Consul instance and exposes it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

Now, you can navigate to the Key/Value tab in the Consul web console and create a new file in YAML format, /config/application.yml, as shown below. Besides the configuration for Swagger and the /info management endpoint, it also enables dynamic HTTP port generation on startup by setting the property micronaut.server.port to -1. Because the name of the file is application.yml, it is by default shared between all microservices that use the Consul config client.

[Figure: shared application.yml created in the Consul Key/Value store]
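Since the screenshot is not reproduced here, a sketch of that shared configuration, assembled from the properties discussed in this article, might look like this:

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false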

Step 4. Service discovery

Micronaut gives you more options for configuring service discovery than for distributed configuration. You can use Eureka, Consul, Kubernetes, or just manually configure a list of available services. However, I have observed that using the Eureka discovery client together with the Consul config client causes some errors on startup. In this example we will use Consul discovery. Because the Consul address has already been provided in bootstrap.yml for every microservice, we just need to enable service discovery by adding the following lines to the application.yml stored in Consul KV.

consul:
  client:
    registration:
      enabled: true

We should also include the following dependency to Maven pom.xml of every single application.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

Finally, you can just run every microservice (you may run more than one instance locally, since the HTTP port is generated dynamically). Here's my list of running services registered in Consul.

[Figure: list of services registered in Consul]

I have run two instances of employee-service as shown below.

[Figure: two running instances of employee-service]

Step 5. Inter-service communication

Micronaut uses its built-in HTTP client for load balancing between multiple instances of a single microservice. By default, it leverages the Round Robin algorithm. We may choose between a low-level HTTP client and a declarative HTTP client based on @Client. The Micronaut declarative HTTP client concept is very similar to Spring Cloud OpenFeign. To use the built-in client, we first need to include the following dependency in the project's pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-client</artifactId>
</dependency>
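For comparison, here's a minimal sketch of the low-level client (a hypothetical helper class, not part of the sample repository; it assumes the Employee model shown earlier). Injecting @Client with a service id resolves the target instances through the discovery client, just like the declarative variant does:

@Singleton
public class EmployeeLowLevelClient {

    @Inject
    @Client("employee-service")
    RxHttpClient httpClient;

    public List<Employee> findAll() {
        // Blocking call for simplicity; the reactive variant would return a Flowable
        return httpClient.toBlocking()
                .retrieve(HttpRequest.GET("/employees"), Argument.listOf(Employee.class));
    }

}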

The declarative client automatically integrates with the discovery client. It tries to find the service registered in Consul under the same name as the value provided in the id field.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {

	@Get("/department/{departmentId}")
	List<Employee> findByDepartment(Long departmentId);
	
}

Now, the client bean needs to be injected into the controller.

@Controller("/departments")
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Inject
	DepartmentRepository repository;
	@Inject
	EmployeeClient employeeClient;
	
	@Post
	public Department add(@Body Department department) {
		LOGGER.info("Department add: {}", department);
		return repository.add(department);
	}
	
	@Get("/{id}")
	public Department findById(Long id) {
		LOGGER.info("Department find: id={}", id);
		return repository.findById(id);
	}
	
	@Get
	public List<Department> findAll() {
		LOGGER.info("Department find");
		return repository.findAll();
	}
	
	@Get("/organization/{organizationId}")
	@ContinueSpan
	public List<Department> findByOrganization(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}
	
	@Get("/organization/{organizationId}/with-employees")
	@ContinueSpan
	public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

Step 6. Distributed tracing

A Micronaut application can be easily integrated with Zipkin to automatically send traces for HTTP traffic there. To enable this feature, we first need to include the following dependencies in pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
	<groupId>io.zipkin.brave</groupId>
	<artifactId>brave-instrumentation-http</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.zipkin.reporter2</groupId>
	<artifactId>zipkin-reporter</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.opentracing.brave</groupId>
	<artifactId>brave-opentracing</artifactId>
</dependency>

Then, we have to provide some configuration settings inside application.yml, including the Zipkin URL and sampler options. By setting the property tracing.zipkin.sampler.probability to 1, we are forcing Micronaut to send traces for every single request. Here's our final configuration.

[Figure: final tracing configuration in application.yml]
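Since the configuration screenshot is not reproduced here, a minimal sketch of such settings might look as follows (the property names assume Micronaut's tracing.zipkin.* configuration keys, and the Zipkin address matches the Docker Toolbox host used in this article):

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1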

During the tests of my application, I have observed that using distributed configuration together with Zipkin tracing results in problems in the communication between a microservice and Zipkin. The traces just do not appear in Zipkin. So, if you would like to test this feature right now, you must provide application.yml on the classpath and disable Consul distributed configuration for all your applications.

We can add some tags to the spans by using the @ContinueSpan or @NewSpan annotations on methods.
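The controllers above already use @ContinueSpan together with @SpanTag. For completeness, here's a hypothetical sketch (not part of the sample system) showing @NewSpan, which starts a fresh span for the annotated method instead of continuing an existing one:

@Singleton
public class ReportService {

    // Starts a new span named "generate-report" and tags it with the report id
    @NewSpan("generate-report")
    public String generate(@SpanTag("report.id") Long id) {
        return "report-" + id;
    }

}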

After making some test calls to the GET methods exposed by organization-service and department-service, we may take a look at the Zipkin web console, available at http://192.168.99.100:9411. The following picture shows the list of all the traces sent to Zipkin by our microservices within one hour.

[Figure: traces collected by Zipkin within one hour]

We can check out the details of every trace by clicking on an element in the list. The following picture illustrates the timeline for the HTTP method exposed by organization-service, GET /organizations/{id}/with-departments-and-employees. This method finds the organization in the in-memory repository, and then calls the HTTP method exposed by department-service, GET /departments/organization/{organizationId}/with-employees. This method is responsible for finding all departments assigned to the given organization. It also needs to return the employees within each department, so it calls the method GET /employees/department/{departmentId} from employee-service.

[Figure: trace timeline for GET /organizations/{id}/with-departments-and-employees]

We can also take a look at the details of every single call from the timeline.

[Figure: details of a single call from the timeline]

Conclusion

In comparison to Spring Boot, Micronaut is still at an early stage of development. For example, I was not able to implement an application that could act as an API gateway for our system, which can easily be achieved with Spring using Spring Cloud Gateway or Spring Cloud Netflix Zuul. There are still some bugs that need to be fixed. But above all that, Micronaut is now probably the most interesting micro-framework on the market. It implements the most popular microservice patterns, provides integration with several third-party solutions like Consul, Eureka, Zipkin or Swagger, consumes less memory, and starts faster than a similar Spring Boot app. I will definitely follow the progress of Micronaut development closely.

Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more and more often used with Spring Boot for building backend services. Starting with version 5, the Spring Framework has introduced first-class support for Kotlin. In this article I'm going to show you an example of a microservice built with Kotlin and Spring Boot 2. I'll describe some interesting features of Spring Boot that can be treated as a set of good practices when building backend, REST-based microservices.

1. Configuration and dependencies

To use Kotlin in your Maven project you have to include the kotlin-maven-plugin and add the /src/main/kotlin and /src/test/kotlin directories to the build configuration. We will also set the -Xjsr305 compiler flag to strict. This option enables strict null-safety checking based on JSR-305 annotations (for example, the @NotNull annotation).

<build>
	<sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
	<testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
	<plugins>
		<plugin>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-plugin</artifactId>
			<configuration>
				<args>
					<arg>-Xjsr305=strict</arg>
				</args>
				<compilerPlugins>
					<plugin>spring</plugin>
				</compilerPlugins>
			</configuration>
			<dependencies>
				<dependency>
					<groupId>org.jetbrains.kotlin</groupId>
					<artifactId>kotlin-maven-allopen</artifactId>
					<version>${kotlin.version}</version>
				</dependency>
			</dependencies>
		</plugin>
	</plugins>
</build>

We should also include some core Kotlin libraries like kotlin-stdlib-jdk8 and kotlin-reflect. They are provided by default for a Kotlin project generated on start.spring.io. For REST-based applications you will also need the Jackson library, used for JSON serialization/deserialization. Of course, we have to include the Spring starter for web applications together with Actuator, which is responsible for providing management endpoints.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>

We use the latest stable version of Spring Boot together with Kotlin 1.2.71.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.2.RELEASE</version>
</parent>
<properties>
	<java.version>1.8</java.version>
	<kotlin.version>1.2.71</kotlin.version>
</properties>

2. Building application

Let's begin with the basics. If you are familiar with Spring Boot and Java, the biggest difference is in the main class declaration. You call the runApplication method outside the Spring Boot application class. The main class, same as in Java, is annotated with @SpringBootApplication.

@SpringBootApplication
class SampleSpringKotlinMicroserviceApplication

fun main(args: Array<String>) {
    runApplication<SampleSpringKotlinMicroserviceApplication>(*args)
}

Our sample application is very simple. It exposes some REST endpoints providing CRUD operations for a model object. Even in this fragment of code illustrating the controller implementation you can see some nice Kotlin features. We may use a shortened function declaration with an inferred return type. The @PathVariable annotation does not require any arguments; the input parameter name is assumed to be the same as the variable name. Of course, we are using the same annotations as in Java. In Kotlin, every property declared as having a non-null type must be initialized in the constructor, so if you are initializing it using dependency injection it has to be declared as lateinit. Here's the implementation of PersonController.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Autowired
    lateinit var repository: PersonRepository

    @GetMapping("/{id}")
    fun findById(@PathVariable id: Int): Person? = repository.findById(id)

    @GetMapping
    fun findAll(): List<Person> = repository.findAll()

    @PostMapping
    fun add(@RequestBody person: Person): Person = repository.save(person)

    @PutMapping
    fun update(@RequestBody person: Person): Person = repository.update(person)

    @DeleteMapping("/{id}")
    fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Kotlin automatically generates getters and setters for class properties declared as var. Also, if you declare a model as a data class, it generates the equals, hashCode, and toString methods. The declaration of our model class Person is very concise, as shown below.

data class Person(var id: Int?, var name: String, var age: Int, var gender: Gender)
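The Gender type referenced above is not shown in the article; a minimal assumption that makes the example compile is a simple enum:

enum class Gender {
    MALE, FEMALE
}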

I have implemented my own in-memory repository class. I use Kotlin extension functions for manipulating the list of elements. This built-in Kotlin feature is similar to Java streams, with the difference that you don't have to perform any conversion between Collection and Stream.

@Repository
class PersonRepository {
    val persons: MutableList<Person> = ArrayList()

    fun findById(id: Int): Person? {
        return persons.singleOrNull { it.id == id }
    }

    fun findAll(): List<Person> {
        return persons
    }

    fun save(person: Person): Person {
        person.id = (persons.maxBy { it.id!! }?.id ?: 0) + 1
        persons.add(person)
        return person
    }

    fun update(person: Person): Person {
        val index = persons.indexOfFirst { it.id == person.id }
        if (index >= 0) {
            persons[index] = person
        }
        return person
    }

    fun removeById(id: Int): Boolean {
        return persons.removeIf { it.id == id }
    }

}

The sample application source code is available on GitHub in repository https://github.com/piomin/sample-spring-kotlin-microservice.git.

3. Enabling Actuator endpoints

Since we have already included the Spring Boot Actuator starter in the application, we can take advantage of its production-ready features. Spring Boot Actuator gives you very powerful tools for monitoring and managing your apps. You can provide advanced healthchecks and info endpoints, or send metrics to numerous monitoring systems like InfluxDB. After including the Actuator artifacts, the only thing we have to do is enable all its endpoints for our application over HTTP.

management.endpoints.web.exposure.include: '*'
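With every endpoint exposed, you can quickly verify that the application is up, assuming it runs on the default port:

$ curl http://localhost:8080/actuator/health
{"status":"UP"}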

We can customize Actuator endpoints to provide more details about our app. A good practice is to expose information about the version and git commit through the info endpoint. As usual, Spring Boot provides auto-configuration for such features, so the only thing we need to do is include some Maven plugins in the build configuration in pom.xml. The build-info goal set for spring-boot-maven-plugin forces it to generate a properties file with basic information about the version. The file is located in the directory META-INF/build-info.properties. The git-commit-id-plugin will generate a git.properties file in the root directory.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>build-info</goal>
			</goals>
		</execution>
	</executions>
</plugin>
<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<configuration>
		<failOnNoGitDirectory>false</failOnNoGitDirectory>
	</configuration>
</plugin>

Now you should just build your application using the mvn clean install command and then run it.

$ java -jar target\sample-spring-kotlin-microservice-1.0-SNAPSHOT.jar

The info endpoint is available under address http://localhost:8080/actuator/info. It exposes all interesting information for us.

{
	"git":{
		"commit":{
			"time":"2019-01-14T16:20:31Z",
			"id":"f7cb437"
		},
		"branch":"master"
	},
	"build":{
		"version":"1.0-SNAPSHOT",
		"artifact":"sample-spring-kotlin-microservice",
		"name":"sample-spring-kotlin-microservice",
		"group":"pl.piomin.services",
		"time":"2019-01-15T09:18:48.836Z"
	}
}

4. Enabling API documentation

Build info and git properties may be easily injected into the application code. This can be useful in some cases. One such case is auto-generated API documentation. The most popular tool used for this is Swagger. You can easily integrate Swagger2 with Spring Boot using the SpringFox Swagger project. First, you need to include the following dependencies in your pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>

Then, you should enable Swagger by annotating the configuration class with @EnableSwagger2. The required information is available inside the BuildProperties and GitProperties beans. We just have to inject them into the Swagger configuration class as shown below. We declare them as Optional to prevent application startup failure in case they are not present on the classpath.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            val buildInfo = build.get()
            val gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths { it.equals("/persons") }
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true)
    }

    @Bean
    fun uiConfig(): UiConfiguration {
        return UiConfiguration(java.lang.Boolean.TRUE, java.lang.Boolean.FALSE, 1, 1, ModelRendering.MODEL, java.lang.Boolean.FALSE, DocExpansion.LIST, java.lang.Boolean.FALSE, null, OperationsSorter.ALPHA, java.lang.Boolean.FALSE, TagsSorter.ALPHA, UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, null)
    }

    private fun apiInfo(version: String): ApiInfo {
        return ApiInfoBuilder()
                .title("API - Person Service")
                .description("Persons Management")
                .version(version)
                .build()
    }

}

The documentation is available under the context path /swagger-ui.html. Besides the API documentation, it displays full information about the application version, git commit id, and branch name.

[Figure: Swagger UI with the version, git commit id and branch name]

5. Choosing your app server

Spring Boot Web applications can be run on three different embedded servers: Tomcat, Jetty or Undertow. By default, Tomcat is used. To change the default server, you just need to include the suitable Spring Boot starter and exclude spring-boot-starter-tomcat. A good practice may be to enable switching between servers during the application build. You can achieve it by declaring Maven profiles as shown below.

<profiles>
	<profile>
		<id>tomcat</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>jetty</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-jetty</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>undertow</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-undertow</artifactId>
			</dependency>
		</dependencies>
	</profile>
</profiles>

Now, if you would like to enable a server other than Tomcat for your application, you should activate the appropriate profile during the Maven build.

$ mvn clean install -Pjetty

Conclusion

Development of microservices using Kotlin and Spring Boot is nice and simple. Based on the sample application, I have introduced the main Spring Boot features for Kotlin. I also described some good practices you may apply to your microservices when building them with Spring Boot and Kotlin. You can compare the described approach with some other micro-frameworks used with Kotlin, for example Ktor, described in one of my previous articles, Kotlin Microservices with Ktor.

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes, you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don't have to provide any Kubernetes YAML templates or build the Docker image by yourself; OpenShift will do it for you. Let's see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample application's source code.

1. Prepare application code

I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code now, so you will be able to compare those two different approaches. Our source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a Mongo database backend. Here's the diagram that illustrates our architecture.

[Figure: architecture of the sample system]

Every microservice is a Spring Boot application which uses Maven as a build tool. After including spring-boot-maven-plugin, it is able to generate a single fat JAR with all dependencies, which is required by the source-2-image builder.

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Every application includes starters for Spring Web, Spring Actuator and Spring Data MongoDB for integration with the Mongo database. We will also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for those applications that call REST endpoints exposed by other microservices.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger-ui</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-mongodb</artifactId>
	</dependency>
</dependencies>

Every Spring Boot application exposes a REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

The application expects to have environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through the OpenFeign declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

	@GetMapping("/employee/organization/{organizationId}")
	List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
	
}

The address of the target service accessed by the Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.

spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally, you just have to download it from the project site, copy minishift.exe (for Windows) to a directory on your PATH, and start it using the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications, Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift, we need to run some additional oc commands to enable source-2-image for Java apps. First, we add some privileges to the admin user to be able to access the openshift project. In this project, OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let's begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as a cluster admin. Now we can grant the cluster-admin role to the admin user.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin. You should be able to see the openshift project. That is not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas addon. I prefer the second option.
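For reference, the manual import might look something like this (the image path below is an assumption based on the public Red Hat registry of that time):

$ oc import-image redhat-openjdk18-openshift --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm

Since I prefer the addon, let's just apply it: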

$ minishift addons apply xpaas

Now, you can go to the Minishift web console (for me available at https://192.168.99.100:8443), select the openshift project and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

[Figure: the redhat-openjdk18-openshift image stream in the openshift project]

The newest version of that image is 1.3. Surprisingly, it is not the newest version available on OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. This is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What's important for us is that OpenShift generates a secret, by default available under the name mongodb.

[Figure: MongoDB template configuration in the OpenShift service catalog]

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based Java standalone projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the application type is auto-detected by OpenShift, and Java source code will be deployed as a JEE application on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let's create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances, each microservice would be located in a separate Git repository. In our sample, all of them are placed in a single repository, so we have to provide the location of the current app by setting the --context-dir parameter. We will also override the default branch with openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service

All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. This can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application, a new build is run. First, it downloads the source code from the Git repository, then it builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
Now, we can create the next application: department. For simplicity, all three microservices connect to the same database, which is not recommended under normal circumstances. In that case, the only difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE, set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

The same as before we also inject mongodb secret into BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we are deploying the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep look into created OpenShift objects

The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available, one for each application. The same list can be displayed using the oc get bc command.

[Figure: list of BuildConfig objects in the web console]

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

[Figure: build history with the Start Build button]

We can always display the YAML configuration file with the BuildConfig definition. But it is also possible to perform similar actions using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

[Figure: environment variables injected from the mongodb secret into the BuildConfig]

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available at 172.30.1.1:5000. The list of available image streams can be found under the section Builds -> Images.

[Figure: image streams stored in the Minishift internal registry]

Every application is automatically exposed on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia) via services. You can also expose these services outside Minishift by creating an OpenShift Route using the oc expose command.

[Figure: services exposing the application ports]

5. Testing the sample system

To proceed with the tests, we should first expose our microservices outside Minishift. To do that, just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that, we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io, as shown below.

[Figure: application response accessed via the OpenShift route]

Each microservice provides Swagger2 API documentation, available on the swagger-ui.html page. Thanks to that, we can easily test every single endpoint exposed by the service.

[Figure: Swagger UI for a deployed microservice]

It's worth noticing that every application makes use of three approaches to injecting environment variables into the pod (see the code and the file sketch after this list):

  1. It stores the version number in the source code repository inside the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property is named VERSION; it is injected using Spring's @Value and set for the Swagger API (the code is visible below).
  2. I have already set the names of dependent services as environment variables while executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.

@Value("${VERSION}")
String version;

public static void main(String[] args) {
	SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}
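The .s2i/environment file itself is just a list of properties in KEY=value format; for example, it might contain a single line like the following (the exact value is an assumption, not taken from the repository):

VERSION=v1.0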

Conclusion

Today I showed you that deploying your applications on OpenShift can be a very simple thing. You don't have to create any YAML descriptor files or build Docker images by yourself to run your app; it is built directly from your source code. You can compare it with the deployment on Kubernetes described in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.