Micronaut Tutorial: Beans and Scopes

Micronaut is a relatively new JVM-based framework, designed especially for building modular, easily testable microservice applications. It is heavily inspired by the Spring and Grails frameworks, which is not a surprise considering it has been developed by the creators of Grails. It is based on Java's annotation processing, IoC (Inversion of Control) and DI (Dependency Injection).

Micronaut implements the JSR-330 (javax.inject) specification for dependency injection. It supports constructor injection, field injection, JavaBean property injection and method parameter injection. In this part of the tutorial I'm going to give some tips on how to:

  • define and register beans in the application context
  • use built-in scopes
  • inject configuration to your application
  • automatically test your beans during application build with JUnit 5

Prerequisites

Before we proceed to development we need to create a sample project with dependencies. Here's the list of Maven dependencies used in the application created for this tutorial:

<dependencies>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-inject</artifactId>
	</dependency>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-runtime</artifactId>
	</dependency>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-inject-java</artifactId>
		<scope>provided</scope>
	</dependency>
	<dependency>
		<groupId>ch.qos.logback</groupId>
		<artifactId>logback-classic</artifactId>
		<version>1.2.3</version>
		<scope>runtime</scope>
	</dependency>
	<dependency>
		<groupId>io.micronaut.test</groupId>
		<artifactId>micronaut-test-junit5</artifactId>
		<scope>test</scope>
	</dependency>
</dependencies>

We will use the newest stable version of Micronaut – 1.1.0:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.micronaut</groupId>
			<artifactId>micronaut-bom</artifactId>
			<version>1.1.0</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-micronaut-applications.git.

Scopes

Micronaut provides 6 built-in scopes for beans. Following JSR-330, additional scopes can be added by defining a @Singleton bean that implements the CustomScope interface. Here's the list of built-in scopes:

    • Singleton – a single instance of the bean exists for the whole application
    • Prototype – a new instance of the bean is created each time it is injected; this is the default scope
    • ThreadLocal – a custom scope that associates a bean with each thread via a ThreadLocal
    • Context – the bean is created at the same time as the ApplicationContext
    • Infrastructure – a @Context bean that cannot be replaced
    • Refreshable – a custom scope that allows a bean's state to be refreshed via the /refresh endpoint
Thread Local

Two of these scopes are particularly interesting. Let's begin with the @ThreadLocal scope, something that is not available for beans in Spring. We can associate a bean with a thread using a single annotation.
How does it work? First, let's define a bean with the @ThreadLocal scope. It holds a single value in the field correlationId. The main purpose of this bean is to pass the same id between different singleton beans within a single thread. Here's our sample bean:

@ThreadLocal
public class MiddleService {

    private String correlationId;

    public String getCorrelationId() {
        return correlationId;
    }

    public void setCorrelationId(String correlationId) {
        this.correlationId = correlationId;
    }

}

Our bean annotated with @ThreadLocal is injected into each singleton bean. There are two sample singleton beans defined:

@Singleton
public class BeginService {

    @Inject
    MiddleService service;

    public void start(String correlationId) {
        service.setCorrelationId(correlationId);
    }

}

@Singleton
public class FinishService {

    @Inject
    MiddleService service;

    public String finish() {
        return service.getCorrelationId();
    }

}
Testing

Testing with Micronaut and JUnit 5 is very simple. We have already included the micronaut-test-junit5 dependency in our pom.xml. Now, we only have to annotate the test class with @MicronautTest.
Here's our test. It submits 20 tasks to a thread pool; each task uses the @ThreadLocal bean through the BeginService and FinishService singletons. Each task sets a randomly generated correlation id and checks that both singleton beans use the same correlationId.

@MicronautTest
public class ScopesTests {

    @Inject
    BeginService begin;
    @Inject
    FinishService finish;

    @Test
    public void testThreadLocalScope() throws InterruptedException {
        final Random r = new Random();
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 20; i++) {
            executor.execute(() -> {
                String correlationId = "abc" + r.nextInt(10000);
                begin.start(correlationId);
                Assertions.assertEquals(correlationId, finish.finish());
            });
        }
        executor.shutdown();
        executor.awaitTermination(30, TimeUnit.SECONDS); // wait for all tasks to finish
        System.out.println("Finished all threads");
    }
	
}
Refreshable

The @Refreshable scope is another interesting one offered by Micronaut. You can refresh the state of such a bean by calling the HTTP endpoint /refresh or by publishing a RefreshEvent to the application context. Because we don't use an HTTP server, the second option is the right choice for us. First, let's define a bean with the @Refreshable scope. It injects a value from a configuration property and returns it:

@Refreshable
public class RefreshableService {

    @Property(name = "test.property")
    String testProperty;

    @PostConstruct
    public void init() {
        System.out.println("Property: " + testProperty);
    }

    public String getTestProperty() {
        return testProperty;
    }

}

To test it we should first replace the value of test.property. After injecting the ApplicationContext into the test we may add a new property source programmatically by calling the method addPropertySource. Because this type of property source has a higher loading priority than the properties from application.yml, the value is overridden. Now, we just need to publish a refresh event to the context and call the method from the sample bean one more time:

@Inject
ApplicationContext context;
@Inject
RefreshableService refreshable;

@Test
public void testRefreshableScope() {
	String testProperty = refreshable.getTestProperty();
	Assertions.assertEquals("hello", testProperty);
	context.getEnvironment().addPropertySource(PropertySource.of(CollectionUtils.mapOf("test.property", "hi")));
	context.publishEvent(new RefreshEvent());
	try {
		Thread.sleep(1000);
	} catch (InterruptedException e) {
		e.printStackTrace();
	}
	testProperty = refreshable.getTestProperty();
	Assertions.assertEquals("hi", testProperty);
}

Beans

In the previous section we have already defined simple beans with different scopes. Micronaut provides some more advanced features that can be used while defining new beans. You can create conditional beans, define replacements for existing beans, or use different methods of injecting configuration into a bean.

Conditions and Replacements

In order to define conditions for a newly created bean we need to annotate it with @Requires. Micronaut offers many ways of defining configuration requirements. You will always use the same annotation, but a different field for each option. You can require:

  • the presence of one or more classes – @Requires(classes=...)
  • the absence of one or more classes – @Requires(missing=...)
  • the presence of one or more beans – @Requires(beans=...)
  • the absence of one or more beans – @Requires(missingBeans=...)
  • a property with an optional value – @Requires(property="...")
  • a property not to be part of the configuration – @Requires(missingProperty="...")
  • the presence of one or more files in the file system – @Requires(resources="file:...")
  • the presence of one or more classpath resources – @Requires(resources="classpath:...")

And some others. Now, let's consider a simple example including some selected conditional strategies. Here's a class that requires the property test.property to be available in the environment:

@Prototype
@Requires(property = "test.property")
public class TestPropertyRequiredService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}

Here's another bean definition. It requires that the property test.property2 is not available in the environment. This bean also replaces another bean through the annotation @Replaces(bean = TestPropertyRequiredValueService.class).

@Prototype
@Requires(missingProperty = "test.property2")
@Replaces(bean = TestPropertyRequiredValueService.class)
public class TestPropertyNotRequiredService {

    public String getTestProperty() {
        return "None";
    }
    
}

Here's the last sample bean declaration. There is one interesting option related to beans conditional on a property: you can require the property to have a certain value, not to have a certain value, and use a default in those checks if it's not set. Also, note that the following bean is the one being replaced by TestPropertyNotRequiredService.

@Prototype
@Requires(property = "test.property", value = "hello", defaultValue = "Hi!")
public class TestPropertyRequiredValueService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}

The result of the following test is predictable:

@Inject
TestPropertyRequiredService service1;
@Inject
TestPropertyNotRequiredService service2;
@Inject
TestPropertyRequiredValueService service3;

@Test
public void testPropertyRequired() {
	String testProperty = service1.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("hello", testProperty);
}

@Test
public void testPropertyNotRequired() {
	String testProperty = service2.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("None", testProperty);
}

@Test
public void testPropertyValueRequired() {
	String testProperty = service3.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("hello", testProperty);
}
Application Configuration

Configuration in Micronaut takes inspiration from both Spring Boot and Grails, integrating configuration properties from multiple sources directly into the core IoC container. By default, configuration can be provided in Java properties, YAML, JSON or Groovy files. There are 7 levels of priority for property sources (for comparison – Spring Boot provides 17 levels); a source higher on the list below overrides those beneath it:

  1. Command line arguments
  2. Properties from SPRING_APPLICATION_JSON
  3. Properties from MICRONAUT_APPLICATION_JSON
  4. Java System Properties
  5. OS environment variables
  6. Environment-specific properties from application-{environment}.{extension}
  7. Application-specific properties from application.{extension}
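
For example, Java system properties (level 4) take precedence over application.{extension} files (level 7), so a value passed with -D overrides the one from application.yml. Here's a minimal sketch illustrating this, assuming application.yml contains test.property: hello:

public class PropertyPriorityExample {

    public static void main(String[] args) {
        // Started with -Dtest.property=hi this prints "hi",
        // otherwise it prints the value from application.yml ("hello")
        try (ApplicationContext context = ApplicationContext.run()) {
            String value = context.getProperty("test.property", String.class).orElse("none");
            System.out.println("test.property = " + value);
        }
    }

}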

One of the more interesting options related to Micronaut configuration is the pair of annotations @EachProperty and @EachBean. Both are used for defining multiple instances of a bean, each with its own distinct configuration.
To show you a sample use case for those annotations, let's imagine we are building a simple client-side load balancer that connects with multiple instances of a service. The configuration is available under the property test.url.* and contains only the target URL:

@EachProperty("test.url")
public class ClientConfig {

    private String name;
    private String url;

    public ClientConfig(@Parameter String name) {
        this.name = name;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Assuming we have the following configuration properties, Micronaut creates three instances of our configuration under the names client1, client2 and client3:

test:
  url:
    client1.url: http://localhost:8080
    client2.url: http://localhost:8090
    client3.url: http://localhost:8100

Using the @EachProperty annotation was only the first step. We also need a ClientService responsible for interacting with the target service:

public class ClientService {

    private String url;

    public ClientService(String url) {
        this.url = url;
    }

    public String connect() {
        return url;
    }
}

The ClientService is still not registered as a bean, since it is not annotated. Our goal is to inject the three ClientConfig beans containing the distinct configurations and register three instances of the ClientService bean. That's why we will define a bean factory with a method annotated with @EachBean. In Micronaut, a factory usually allows you to register a bean that is not a part of your codebase, but it is also useful in this case.

@Factory
public class ClientFactory {

    @EachBean(ClientConfig.class)
    ClientService client(ClientConfig config) {
        String url = config.getUrl();
        return new ClientService(url);
    }
}

Finally, we may proceed to the test. We have injected all three instances of ClientService. Each of them contains configuration injected from a different instance of the ClientConfig bean. If you don't set any qualifier, Micronaut injects the bean containing the configuration defined first. For injecting the other instances of the bean we should use a qualifier, which is the name of the configuration property.

@Inject
ClientService client;
@Inject
@Named("client2")
ClientService client2;
@Inject
@Named("client3")
ClientService client3;

@Test
public void testClient() {
	String url = client.connect();
	Assertions.assertEquals("http://localhost:8080", url);
	url = client2.connect();
	Assertions.assertEquals("http://localhost:8090", url);
	url = client3.connect();
	Assertions.assertEquals("http://localhost:8100", url);
}

Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barrier is Micronaut, a framework quickly gaining popularity, designed especially for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size in MB of generated fat JAR file
  • the application startup time
  • the performance of the application, in terms of average response time from the REST endpoint during sample load testing

To make our test relevant we will gather statistics for two almost identical applications. Of course, the only difference between them will be the framework used for building them. Our sample application is very simple. It exposes some endpoints with in-memory CRUD operations for a single entity. It also exposes info and health endpoints, as well as a Swagger API with auto-generated documentation of all endpoints.

The application's performance will be tested on JDK 11. We will use YourKit for profiling and monitoring memory usage after startup and during load testing, and Gatling for building performance API tests. First, let's take a short overview of our sample application.

Source Code

I have implemented a very simple in-memory repository bean that adds a new object to the list and provides a find method for searching objects by the id generated during the add operation.

public class PersonRepository {

    List<Person> persons = new ArrayList<>();

    public Person add(Person person) {
        person.setId(persons.size()+1);
        persons.add(person);
        return person;
    }

    public Person findById(Long id) {
        return persons.stream()
                .filter(a -> a.getId().equals(id))
                .findFirst()
                .orElse(null);
    }

    public List<Person> findAll() {
        return persons;
    }

}

The repository bean is injected into the controller, which exposes three HTTP endpoints: one (POST) for adding a new object, one (GET) for searching for it by id, and one (GET) for returning the whole list. Here's the controller implementation inside the Spring Boot application:

@RestController
@RequestMapping("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Autowired
    PersonRepository repository;

    @PostMapping
    public Person add(@RequestBody Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable("id") Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @GetMapping
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}

Here's a similar implementation for Micronaut. The listing below is a sketch of the Micronaut counterpart built with the @Controller, @Post and @Get annotations (assuming the same Person and PersonRepository classes):
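
@Controller("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Inject
    PersonRepository repository;

    @Post
    public Person add(@Body Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @Get("/{id}")
    public Person findById(Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}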

To implement the REST endpoints, healthcheck and Swagger API we need to include some dependencies. Here's the list of dependencies for Spring Boot:

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.4.RELEASE</version>
</parent>
<groupId>pl.piomin.services</groupId>
<artifactId>sample-app</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
	<java.version>11</java.version>
	<maven.compiler.source>${java.version}</maven.compiler.source>
	<maven.compiler.target>${java.version}</maven.compiler.target>
</properties>
<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
</dependencies>

Here's the corresponding list of dependencies required for Micronaut:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>
<dependency>
	<groupId>ch.qos.logback</groupId>
	<artifactId>logback-classic</artifactId>
	<version>1.2.3</version>
	<scope>runtime</scope>
</dependency>

I also had to provide some additional configuration in application.yml to enable Swagger and healthchecks:

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false

Starting Application

First, let's start our application. I use IntelliJ for that. The sample application built on Spring Boot starts in around 6-7 seconds. The run shown below took exactly 6.344 seconds.

[Image: performance-1 – Spring Boot startup log]

The similar application built on top of Micronaut starts in around 3-4 seconds. The run shown below took exactly 3.463 seconds. However, I had to disable environment deduction when starting the application behind a corporate proxy by setting the VM option -Dmicronaut.cloud.platform=BARE_METAL. I think that the startup time of both applications is really fine.

[Image: performance-7 – Micronaut startup log]

Here's a graph that illustrates the difference in startup time between Spring Boot and Micronaut.

[Image: performance-sum-1 – startup time comparison]

Building Application

We will also check the size of the application fat JAR. To do that you should build the application using the mvn clean install command. For Spring Boot we used two standard starters, Web and Actuator, plus the Swagger SpringFox library. As a result, more than 50 libraries are included. Of course, we could make some exclusions or avoid starters, but I have chosen the simplest way of building the application. The fat JAR has a size of 24.2 MB.
The similar application based on Micronaut is much smaller. Although I included more dependencies in pom.xml, there were 37 libraries included in total, and the fat JAR has a size of 12.1 MB.
Spring Boot includes more libraries in the standard configuration, but on the other hand it offers more features and auto-configuration than Micronaut.

Here's a graph that illustrates the difference in size of the target JAR between Spring Boot and Micronaut.

[Image: performance-sum-2 – fat JAR size comparison]

Memory Management

Just after startup the Spring Boot application has allocated 305 MB for heap and 81 MB for non-heap. I haven't set any memory limit using Xmx or any other option. In the heap, 8 MB was consumed by old gen, 60 MB by eden space, and 15 MB by survivor space. Most of the non-heap memory was consumed by metaspace – 52 MB.
After running the performance load test heap allocation increased to 369 MB, and non-heap to 87 MB. Here's a screen that illustrates CPU and RAM usage before and during the performance test.

[Image: performance-2 – Spring Boot CPU and RAM usage before and during the performance test]

Just after startup the Micronaut application has allocated 254 MB for heap and 51 MB for non-heap. I haven't set any memory limit using Xmx or any other option – the same as for the Spring Boot application. In the heap, 2.5 MB was consumed by old gen, 20 MB by eden space, and 7 MB by survivor space. Most of the non-heap memory was consumed by metaspace – 35 MB.
After running the performance load test heap allocation did not change, while non-heap increased to 63 MB. Here's a screen that illustrates CPU and RAM usage before and during the performance test.

[Image: performance-8 – Micronaut CPU and RAM usage before and during the performance test]

Here's the heap memory usage comparison between Spring Boot and Micronaut just after startup.

[Image: performance-sum-3 – heap usage comparison]

And non-heap.

[Image: performance-sum-4 – non-heap usage comparison]

Performance Tests

I used Gatling for building the performance load tests. This tool lets you create test scenarios in Scala. We generate 40k sample requests sent simultaneously by 20 threads. Here's the test implemented for the POST method:

class SimpleTest extends Simulation {

  val scn = scenario("AddPerson").repeat(2000, "n") {
    exec(http("Persons-POST")
      .post("http://localhost:8080/persons")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"name":"Test${n}","gender":"MALE","age":100}"""))
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}

Here's the test implemented for the GET method:

class SimpleTest2 extends Simulation {

  val scn = scenario("GetPerson").repeat(2000, "n") {
    exec(http("Persons-GET")
      .get("http://localhost:8080/persons/${n}")
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}

The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1176.

[Image: performance-3 – Gatling results for Spring Boot POST /persons]

The following screen shows the histogram with response time percentiles over time.

[Image: performance-5 – response time percentiles over time]

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1428.

[Image: performance-4 – Gatling results for Spring Boot GET /persons/{id}]

The following screen shows the histogram with response time percentiles over time.

[Image: performance-6 – response time percentiles over time]

Now, we run the same Gatling load test against the Micronaut application. The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1290.

[Image: performance-9 – Gatling results for Micronaut POST /persons]

The following screen shows the histogram with response time percentiles over time.

[Image: performance-11 – response time percentiles over time]

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1538.

[Image: performance-10 – Gatling results for Micronaut GET /persons/{id}]

The following screen shows the histogram with response time percentiles over time.

[Image: performance-12 – response time percentiles over time]

There is no big difference in processing time between Spring Boot and Micronaut. It's possible that the small differences are not related to the framework itself, but rather to the underlying server. By default, Spring Boot uses Tomcat, while Micronaut uses Netty.

The Future of Spring Cloud Microservices After Netflix Era

If somebody asked you about Spring Cloud, the first thing that comes to your mind would probably be Netflix OSS support. Support for tools like Eureka, Zuul or Ribbon is provided not only by Spring, but also by some other popular frameworks used for building microservices architectures, like Apache Camel, Vert.x or Micronaut. Currently, Spring Cloud Netflix is the most popular project within Spring Cloud. It has around 3.2k stars on GitHub, while the second best has around 1.4k. Therefore, it is quite surprising that Pivotal has announced that most of the Spring Cloud Netflix modules are entering maintenance mode. You can read more about it in the post published on the Spring blog by Spencer Gibb: https://spring.io/blog/2018/12/12/spring-cloud-greenwich-rc1-available-now.
Ok, let's make a short summary of those changes. Starting from the Spring Cloud Greenwich release train, the Netflix OSS Archaius, Hystrix, Ribbon and Zuul modules are entering maintenance mode. It means that there won't be any new features in these modules, and the Spring Cloud team will only perform bug fixes and patch security issues. Maintenance mode does not include the Eureka module, which is still supported.
The explanation for these changes is pretty simple, especially for two of them. Currently, Ribbon and Hystrix are not actively developed by Netflix, although they are still deployed at scale. Additionally, Hystrix has already been superseded by the new solution for telemetry called Atlas. The situation with Zuul is not so obvious. Netflix announced the open sourcing of Zuul 2 in May 2018. The new version of the Zuul gateway is built on top of the Netty server and includes some improvements and new features. You can read more about them on the Netflix blog: https://medium.com/netflix-techblog/open-sourcing-zuul-2-82ea476cb2b3. Despite that decision taken by the Netflix cloud team, the Spring Cloud team has abandoned development of the Zuul module. I can only guess that it was caused by the earlier decision to start a new module inside the Spring Cloud family designed especially to be an API gateway in microservices-based architecture – Spring Cloud Gateway.
The last piece of the puzzle is Eureka – a discovery server. It is still developed, but the situation is also interesting here. I will describe it in the next part of this article.
All this news has inspired me to take a look at the current state of Spring Cloud and discuss some potential changes in the future. As the author of the Mastering Spring Cloud book I'm trying to follow the evolution of that project to stay current. It's also worth mentioning that we have microservices inside my organization – of course built on top of Spring Boot and Spring Cloud using modules like Eureka, Zuul and Ribbon. In this article, I would like to discuss some potential alternatives for such popular microservices patterns as service discovery, distributed configuration, client-side load balancing and API gateway.

Service Discovery

Eureka is the only important Spring Cloud Netflix module that has not been moved to maintenance mode. However, I would not say that it is actively developed. The last commit in the repository maintained by Netflix is from 11th January. Some time ago they started working on Eureka 2, but it seems this work has been abandoned, or they have just postponed open sourcing the newest version's code. Here https://github.com/Netflix/eureka/tree/2.x you can find an interesting comment about it: "The 2.x branch is currently frozen as we have had some internal changes w.r.t. to eureka2, and do not have any time lines for open sourcing of the new changes.". So, we have two possibilities. Maybe Netflix will decide to open source those internal changes as version 2 of the Eureka server. It is worth remembering that Eureka is a battle-proven solution used at scale by Netflix directly, and probably by many other organizations through Spring Cloud.
The second option is to choose another discovery server. Currently, Spring Cloud supports discovery based on various tools: ZooKeeper, Consul, Alibaba Nacos and Kubernetes (which in fact is based on etcd). Support for etcd itself is also being developed by Spring Cloud, but it is still in the incubation stage, and it is not known if it will ever be promoted to the official release train. In my opinion, there is one leader amongst these solutions – HashiCorp's Consul.
Consul is now described as a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality. It can be used as a discovery server or a key/value store in your microservices-based architecture. The integration with Consul is implemented by the Spring Cloud Consul project. To enable the Consul client for your application you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

By default, Spring tries to connect with Consul on the address localhost:8500. If you need to override this address you should set the appropriate properties inside application.yml:

spring:  
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

You can easily test this solution with a local instance of Consul started as a Docker container:

$ docker run -d --name consul -p 8500:8500 consul

As you see, implementing Consul discovery with Spring Cloud is very easy – the same as for Eureka. Consul has one undoubted advantage over Eureka – it is continuously maintained and developed by HashiCorp. Its popularity is growing fast. It is a part of the bigger HashiCorp ecosystem, which includes Vault, Nomad and Terraform. In contrast to Eureka, Consul can be used not only for service discovery, but also as a configuration server in your microservices-based architecture.
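
Once a service is registered, other applications can look it up through Spring Cloud's generic DiscoveryClient abstraction, regardless of whether Consul or Eureka is behind it. Here's a minimal sketch (the endpoint and the callme-service name are just examples, not part of the sample application):

@RestController
public class InstancesController {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Lists host:port of every registered instance of callme-service
    @GetMapping("/instances")
    public List<String> instances() {
        return discoveryClient.getInstances("callme-service").stream()
                .map(si -> si.getHost() + ":" + si.getPort())
                .collect(Collectors.toList());
    }

}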

Distributed Configuration

Netflix Archaius is an interesting solution for managing externalized configuration in microservices architecture. Although it offers some interesting features like dynamic and typed properties or support for dynamic data sources such as URLs, JDBC or AWS DynamoDB, Spring Cloud has also decided to move it to maintenance mode. However, the popularity of Spring Cloud Archaius was limited, due to the existence of a similar project created by the Pivotal team and the community – Spring Cloud Config. Spring Cloud Config supports multiple source repositories including Git, JDBC, Vault and simple files. You can find many examples of using this project for providing distributed configuration for your microservices in my previous posts. Today, I'm not going to talk about it. We will discuss an alternative solution – also supported by Spring Cloud.
As I have mentioned at the end of the previous section, Consul can also be used as a configuration server. If you use Eureka as a discovery server, using Spring Cloud Config as a configuration server is a natural choice, because Eureka simply does not provide such a feature. This is not the case if you decide to use Consul. Then it makes sense to choose between two solutions: Spring Cloud Consul Config and Spring Cloud Config. Of course, both of them have their advantages and disadvantages. For example, you can easily build a cluster with Consul nodes, while with Spring Cloud Config you must rely on external discovery.
Now, let's see how to use Spring Cloud Consul for managing external configuration in your application. To enable it on the application side you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-config</artifactId>
</dependency>

The same as for service discovery, if you would like to override some default client settings you need to set the spring.cloud.consul.* properties. However, such configuration must be provided inside bootstrap.yml.

spring:  
  application:
    name: callme-service
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

The property source created on Consul should have the same name as the application name provided in bootstrap.yml, and it should be located inside the config folder. You should create the key server.port with value 0 to force Spring Boot to generate the listening port number randomly. Supposing you need to set a default listening port for the application, you should provide the following configuration.

[Image: spring-cloud-1 – server.port key for callme-service on Consul]
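
The screenshot shows the Consul UI; the same key can be created with the Consul CLI (a sketch, assuming the default config prefix):

$ consul kv put config/callme-service/server.port 0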

When enabling dynamic port number generation you also need to override the application instance id to be unique across a single machine. This is required if you are running multiple instances of a single service on the same machine. We will do it for callme-service, so we need to override the property spring.cloud.consul.discovery.instance-id with our value as shown below.

[Image: spring-cloud-4 – spring.cloud.consul.discovery.instance-id key for callme-service on Consul]
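
Again as a CLI sketch – one common choice, assumed here rather than taken from the screenshot, is to append a random value to the application name:

$ consul kv put config/callme-service/spring.cloud.consul.discovery.instance-id '${spring.application.name}:${random.value}'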

Then, you should see the following log on your application startup.

[Image: spring-cloud-3 – application startup log]

API Gateway

The successor of Spring Cloud Netflix Zuul is Spring Cloud Gateway. The project was started around two years ago, and is now the second most popular Spring Cloud project with 1.4k stars on GitHub. It provides an API gateway built on top of the Spring ecosystem, including Spring 5, Spring Boot 2 and Project Reactor. It runs on Netty and does not work with traditional servlet containers like Tomcat or Jetty. It allows you to define routes, predicates and filters.
An API gateway, the same as every Spring Cloud microservice, may be easily integrated with service discovery based on Consul. We just need to include the appropriate dependencies inside pom.xml. We will use the latest development version of the Spring Cloud libraries – 2.2.0.BUILD-SNAPSHOT. Here's the list of required dependencies:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-config</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-gateway</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

The gateway configuration will also be served by Consul. Because we have considerably more configuration settings than for the sample microservices, we will store them as a YAML file. To achieve that we should create a YAML file available under the path /config/gateway-service/data in the Consul key/value store. The configuration visible below enables service discovery integration and defines routes to the downstream services. Each route contains the name of the target service under which it is registered in service discovery, a matching path, and a rewrite path used for calling the endpoint exposed by the downstream service. The following configuration is loaded on startup by our API gateway:

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: caller-service
          uri: lb://caller-service
          predicates:
            - Path=/caller/**
          filters:
            - RewritePath=/caller/(?<path>.*), /$\{path}
        - id: callme-service
          uri: lb://callme-service
          predicates:
            - Path=/callme/**
          filters:
            - RewritePath=/callme/(?<path>.*), /$\{path}

Here’s the same configuration visible on Consul.

[Image: spring-cloud-2 – gateway-service configuration on Consul]

The last step is to force gateway-service to read the configuration stored as YAML. To do that we need to set the property spring.cloud.consul.config.format to YAML. Here's the full configuration provided inside bootstrap.yml.

spring:
  application:
    name: gateway-service
  cloud:
    consul:
      host: 192.168.99.100
      config:
        format: YAML

Client-side Load Balancer

In version 2.2.0.BUILD-SNAPSHOT of Spring Cloud Commons, Ribbon is still the main auto-configured load balancer for HTTP clients. Although the Spring Cloud team has announced that Spring Cloud Load Balancer will be the successor of Ribbon, we currently won't find much information about that project in the documentation or on the web. We may expect that, the same as for Netflix Ribbon, any configuration will be transparent for us, especially if we use a discovery client. Currently, the spring-cloud-loadbalancer module is a part of the Spring Cloud Commons project. You may include it directly in your application by declaring the following dependency in pom.xml:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-loadbalancer</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

For test purposes it is worth excluding some Netflix modules included together with the spring-cloud-starter-consul-discovery starter. Then we can be sure that Ribbon is not used in the background as a load balancer. Here's the list of exclusions I set for my sample application:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
	<exclusions>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-netflix-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-netflix-archaius</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-httpclient</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-loadbalancer</artifactId>
		</exclusion>
	</exclusions>
</dependency>

Treat my example just as a playground. The target approach will certainly be much easier. First, we should annotate our main or configuration class with @LoadBalancerClient. As always, the name of the client should be the same as the name of the target service registered in the registry. The annotation should also point to the class with the client configuration.

@SpringBootApplication
@LoadBalancerClients({
	@LoadBalancerClient(name = "callme-service", configuration = ClientConfiguration.class)
})
public class CallerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallerApplication.class, args);
	}

	@Bean
	RestTemplate template() {
		return new RestTemplate();
	}

}

Here's our load balancer configuration class. It contains the declaration of a single @Bean. I have chosen the RoundRobinLoadBalancer type.

public class ClientConfiguration {

	@Bean
	public RoundRobinLoadBalancer roundRobinContextLoadBalancer(LoadBalancerClientFactory clientFactory, Environment env) {
		String serviceId = clientFactory.getName(env);
		return new RoundRobinLoadBalancer(serviceId, clientFactory
				.getLazyProvider(serviceId, ServiceInstanceSupplier.class), -1);
	}

}

Finally, here's the implementation of the caller-service controller. It uses LoadBalancerClientFactory directly to find the list of available instances of callme-service. Then it selects a single instance, gets its host and port, and sets them as the target URL.

@RestController
@RequestMapping("/caller")
public class CallerController {

	@Autowired
	Environment environment;
	@Autowired
	RestTemplate template;
	@Autowired
	LoadBalancerClientFactory clientFactory;

	@GetMapping
	public String call() {
		RoundRobinLoadBalancer lb = clientFactory.getInstance("callme-service", RoundRobinLoadBalancer.class);
		ServiceInstance instance = lb.choose().block().getServer();
		String url = "http://" + instance.getHost() + ":" + instance.getPort() + "/callme";
		String callmeResponse = template.getForObject(url, String.class);
		return "I'm Caller running on port " + environment.getProperty("local.server.port")
				+ " calling-> " + callmeResponse;
	}

}

Summary

The following picture illustrates the architecture of the sample system. We have two instances of callme-service and a single instance of caller-service, which uses Spring Cloud Load Balancer to find the list of available instances of callme-service. The ports are generated dynamically. The API gateway hides the complexity of our system from external clients. It is available on port 8080, and forwards requests to the downstream services based on the request context path.

[Image: spring-cloud-1.png – architecture of the sample system]

After starting all the microservices, they should be registered on your Consul node.

[Image: spring-cloud-7 – microservices registered on the Consul node]

Now, you can call the endpoint exposed by caller-service through the gateway: http://localhost:8080/caller. You should see something like this:

[Image: spring-cloud-6 – response from caller-service]

The sample application source code is available on GitHub in repository https://github.com/piomin/sample-spring-cloud-microservices-future.git.

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Following this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is a part of a powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text searches over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always the question of the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins).
No matter how you import your data into Elasticsearch, you have to consider another problem: the data structure. You probably have data distributed between several tables in your relational database. If you would like to take advantage of Elasticsearch, you should store it as a single type. This forces you to keep redundant data, which results in larger disc space usage. Of course, that effect is acceptable if the queries work faster than the equivalent queries against the relational database.
Ok, let's proceed to the example after that long introduction. Spring Boot provides an easy way to interact with Elasticsearch through Spring Data repositories.

1. Enabling Elasticsearch support

As is customary with Spring Boot, we don't have to provide any additional beans in the context to enable support for Elasticsearch. We just need to include the following dependency in our pom.xml:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

By default, the application tries to connect with Elasticsearch on localhost. If we use another target URL we need to override it in the configuration settings. Here's the fragment of our application.yml file that overrides the default cluster name and address with the address of Elasticsearch started in a Docker container:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.99.100:9300

The health status of the Elasticsearch connection may be exposed by the application through the Spring Boot Actuator health endpoint. First, you need to include the following Maven dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The healthcheck is enabled by default, and the Elasticsearch check is auto-configured. However, this verification is performed via the Elasticsearch REST API client. In that case, we need to override the property spring.elasticsearch.rest.uris responsible for setting the address used by the REST client:

spring:
  elasticsearch:
    rest:
      uris: http://192.168.99.100:9200

2. Running Elasticsearch

For our tests we need a single-node Elasticsearch instance running in development mode. As usual, we will use a Docker container. Here's the command that starts the container and exposes it on ports 9200 and 9300.

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2

3. Building Spring Data Repositories

To enable Elasticsearch repositories we just need to annotate the main or configuration class with @EnableElasticsearchRepositories:

@SpringBootApplication
@EnableElasticsearchRepositories
public class SampleApplication { ... }

The next step is to create a repository interface that extends CrudRepository. It provides some basic operations like save and findById. If you would like to have some additional find methods, you should define new methods inside the interface following the Spring Data naming convention.

public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    List<Employee> findByOrganizationName(String name);
    List<Employee> findByName(String name);

}

4. Building Document

Our relational structure of entities is flattened into a single Employee object that contains related objects (Organization, Department). You can compare this approach to creating a view for a group of related tables in an RDBMS. In Spring Data Elasticsearch nomenclature a single object is stored as a document. So, you need to annotate your object with @Document. You should also set the name of the target Elasticsearch index, its type and id. Additional mappings can be configured with the @Field annotation.

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private Long id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;
	
    // Getters and Setters ...

}

5. Initial import

As I have mentioned in the preface, the main reason you may decide to use Elasticsearch is the need to work with large data sets. Therefore it is desirable to fill our test Elasticsearch node with many documents. If you would like to insert many documents in one step you should definitely use the Bulk API, which makes it possible to perform many index/delete operations in a single API call. This can greatly increase the indexing speed.
Bulk operations may be performed with the Spring Data ElasticsearchTemplate bean, which is also auto-configured by Spring Boot. The template provides a bulkIndex method that takes a list of index queries as its input parameter. Here's the implementation of the bean that inserts sample test data on application startup:

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";

    @Autowired
    EmployeeRepository repository;
    @Autowired
    ElasticsearchTemplate template;

    @PostConstruct
    public void init() {
        for (int i = 0; i < 10000; i++) {
            bulk(i);
        }
    }

    public void bulk(int ii) {
        try {
            if (!template.indexExists(INDEX_NAME)) {
                template.createIndex(INDEX_NAME);
            }
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setId(employee.getId().toString());
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", ii);
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }
	
	// sample data set implementation ...
	
}

If you don't need to insert data on startup you can disable that process by setting the property initial-import.enabled to false. Here's the declaration of the SampleDataSet bean:

@Bean
@ConditionalOnProperty("initial-import.enabled")
public SampleDataSet dataSet() {
	return new SampleDataSet();
}
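
For instance, the flag can be provided in application.yml – with @ConditionalOnProperty the bean is created only when the property is present and its value is not false:

initial-import:
  enabled: true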

6. Viewing data and running queries

Assuming that you have already started the sample application, the bean responsible for bulk indexing was not disabled, and you had enough patience to wait some hours until all the data was inserted into your Elasticsearch node, it now contains 100M documents of the employee type. It is worth displaying some information about your cluster. You can do it using Elasticsearch queries, or you can download one of the available GUI tools, for example ElasticHQ. Fortunately, ElasticHQ is also available as a Docker container. You have to execute the following command to start a container with ElasticHQ:

$ docker run -d --name elastichq -p 5000:5000 elastichq/elasticsearch-hq

After starting, the ElasticHQ GUI can be accessed via a web browser on port 5000. Its web console provides basic information about the cluster and indices, and allows you to perform queries. You only need to enter the Elasticsearch node address and you will be redirected to the main dashboard with statistics. Here's the main dashboard of ElasticHQ.

[Image: elastic-3 – ElasticHQ main dashboard]

As you can see, we have a single index called sample divided into 5 shards. That is the default value provided by Spring Data @Document, which can be overridden with the shards field. After clicking on the index we can navigate to the index management panel. There you can perform some operations on the index, like clearing the cache or refreshing the index. You can also take a look at the statistics for all shards.

[Image: elastic-4 – index management panel]

For the current test purposes, I have around 25M documents (around ~3 GB of space) of the Employee type. We can execute some test queries. I have exposed two endpoints for searching: by employee name GET /employees/{name} and by organization name GET /employees/organization/{organizationName}. The results are not overwhelming. I think we could get the same results for a relational database with the same amount of data.

[Image: elastic-2 – sample search results]
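
The controller behind those two endpoints is not shown above; a minimal sketch could look like this (it simply delegates to the derived query methods of EmployeeRepository):

@RestController
@RequestMapping("/employees")
public class EmployeeController {

    @Autowired
    EmployeeRepository repository;

    // Search by employee name, e.g. GET /employees/John Smith
    @GetMapping("/{name}")
    public List<Employee> findByName(@PathVariable("name") String name) {
        return repository.findByName(name);
    }

    // Search by organization name, e.g. GET /employees/organization/TestO
    @GetMapping("/organization/{organizationName}")
    public List<Employee> findByOrganizationName(@PathVariable("organizationName") String organizationName) {
        return repository.findByOrganizationName(organizationName);
    }

}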

7. Testing

Ok, we have already finished development and performed some manual tests on a large data set. Now, it's time to create some integration tests that run at build time. We can use a library that automatically starts Docker containers with databases during JUnit tests – Testcontainers. For more about this library you may refer to its site https://www.testcontainers.org or to one of my previous articles: Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework. Fortunately, Testcontainers supports Elasticsearch. To enable it in the test scope you first need to include the following dependency in your pom.xml:

<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>elasticsearch</artifactId>
	<version>1.11.1</version>
	<scope>test</scope>
</dependency>

The next step is to define a @ClassRule or @Rule bean that points to the Elasticsearch container. It is started automatically before the test class or before each test, depending on the annotation you use. The exposed port number is generated automatically, so you need to retrieve it and set it as the value of the spring.data.elasticsearch.cluster-nodes property. Here's the full implementation of our JUnit integration test:

@RunWith(SpringRunner.class)
@SpringBootTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class EmployeeRepositoryTest {

    @ClassRule
    public static ElasticsearchContainer container = new ElasticsearchContainer();
    @Autowired
    EmployeeRepository repository;

    @BeforeClass
    public static void before() {
        System.setProperty("spring.data.elasticsearch.cluster-nodes", container.getContainerIpAddress() + ":" + container.getMappedPort(9300));
    }

    @Test
    public void testAdd() {
        Employee employee = new Employee();
        employee.setId(1L);
        employee.setName("John Smith");
        employee.setAge(33);
        employee.setPosition("Developer");
        employee.setDepartment(new Department(1L, "TestD"));
        employee.setOrganization(new Organization(1L, "TestO", "Test Street No. 1"));
        employee = repository.save(employee);
        Assert.assertNotNull(employee);
    }

    @Test
    public void testFindAll() {
        Iterable<Employee> employees = repository.findAll();
        Assert.assertTrue(employees.iterator().hasNext());
    }

    @Test
    public void testFindByOrganization() {
        List<Employee> employees = repository.findByOrganizationName("TestO");
        Assert.assertTrue(employees.size() > 0);
    }

    @Test
    public void testFindByName() {
        List<Employee> employees = repository.findByName("John Smith");
        Assert.assertTrue(employees.size() > 0);
    }

}

Summary

In this article you have learned how to:

  • Run your local instance of Elasticsearch with Docker
  • Integrate Spring Boot application with Elasticsearch
  • Use Spring Data Repositories for saving data and performing simple queries
  • Use Spring Data ElasticsearchTemplate to perform bulk operations on an index
  • Use ElasticHQ for monitoring your cluster
  • Build automatic integration tests for Elasticsearch with Testcontainers

The sample application source code is as usual available on GitHub in repository sample-spring-elasticsearch.

Redis in Microservices Architecture

Redis can be widely used in microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, a cache or a message broker. Since it is also a key/value store, it can serve as a configuration server or discovery server in your microservices architecture. Although it is usually defined as an in-memory data structure store, it can also be run in persistent mode.
Today, I'm going to show you some examples of using Redis with microservices built on top of the Spring Boot and Spring Cloud frameworks. These applications will communicate with each other asynchronously using Redis Pub/Sub, use Redis as a cache or primary database, and finally use Redis as a configuration server. Here's the picture that illustrates the described architecture.

[Image: redis-micro-2.png – architecture of the sample system]

Redis as Configuration Server

If you have already built microservices with Spring Cloud, you have probably come across Spring Cloud Config. It is responsible for providing the distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a backend repository for property sources. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in an official Spring Cloud release, but for now you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config. How to use it? Very simple. Let's see.
The current SNAPSHOT version of Spring Boot is 2.2.0.BUILD-SNAPSHOT, the same as for Spring Cloud Config. While building the Spring Cloud Config Server we need to include only the dependencies shown below.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</parent>
<artifactId>config-service</artifactId>
<groupId>pl.piomin.services</groupId>
<version>1.0-SNAPSHOT</version>

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-config-server</artifactId>
		<version>2.2.0.BUILD-SNAPSHOT</version>
	</dependency>
</dependencies>

By default, Spring Cloud Config Server uses a Git repository backend. We need to activate the redis profile to force it to use Redis as a backend. If your Redis instance listens on an address other than localhost:6379, you need to overwrite the auto-configured connection settings with spring.redis.* properties. Here’s our bootstrap.yml file.

spring:
  application:
    name: config-service
  profiles:
    active: redis
  redis:
    host: 192.168.99.100

The application main class should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

	public static void main(String[] args) {
		new SpringApplicationBuilder(ConfigApplication.class).run(args);
	}

}

Before running the application we need to start a Redis instance. Here’s the command that runs it as a Docker container and exposes it on port 6379.

$ docker run -d --name redis -p 6379:6379 redis

The configuration for every application has to be available under the key ${spring.application.name} or ${spring.application.name}-${spring.profiles.active[n]}.
We have to create a hash with keys corresponding to the names of the configuration properties. Our sample application driver-management uses three configuration properties: server.port for setting the HTTP listening port, spring.redis.host for changing the default address of Redis used as a message broker and database, and sample.topic.name for setting the name of the topic used for asynchronous communication between our microservices. Here’s the structure of the Redis hash created for driver-management, visualized with RDBTools.

redis-micro-3

That visualization is the equivalent of running the Redis CLI command HGETALL, which returns all the fields and values in a hash.

>> HGETALL driver-management
{
  "server.port": "8100",
  "sample.topic.name": "trips",
  "spring.redis.host": "192.168.99.100"
}
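
To create that hash you may use a single HSET command (passing multiple field/value pairs in one call requires Redis 4.0 or newer):

>> HSET driver-management server.port 8100 sample.topic.name trips spring.redis.host 192.168.99.100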

After setting the keys and values in Redis and running Spring Cloud Config Server with the active redis profile, we need to enable the distributed configuration feature on the client side. To do that we just need to include the spring-cloud-starter-config dependency in the pom.xml of every microservice.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-config</artifactId>
</dependency>

We use the newest stable version of Spring Cloud.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Greenwich.SR1</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The name of the application is taken from the spring.application.name property on startup, so we need to provide the following bootstrap.yml file.

spring:
  application:
    name: driver-management
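
One more remark: the client looks for the config server at http://localhost:8888 by default. If your config-service listens on another address, you would also have to point the client at it in bootstrap.yml, for example:

spring:
  cloud:
    config:
      uri: http://192.168.99.100:8888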

Redis as Message Broker

Now we can proceed to the second use case of Redis in a microservices-based architecture – a message broker. We will implement a typical asynchronous system, which is shown in the picture below. The trip-management microservice sends a notification to Redis Pub/Sub after creating a new trip and after finishing the current one. The notification is received by both driver-management and passenger-management, which subscribe to the particular channel.

micro-redis-1.png

Our application is very simple. We just need to add the following dependencies in order to provide REST API and integrate with Redis Pub/Sub.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

We should register beans for the channel name and the publisher. TripPublisher is responsible for sending messages to the target topic.

@Configuration
public class TripConfiguration {

	@Autowired
	RedisTemplate<?, ?> redisTemplate;

	@Bean
	TripPublisher redisPublisher() {
		return new TripPublisher(redisTemplate, topic());
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

TripPublisher uses RedisTemplate for sending messages to the topic. Before sending, it converts every message from an object to a JSON string using Jackson2JsonRedisSerializer.

public class TripPublisher {

	private static final Logger LOGGER = LoggerFactory.getLogger(TripPublisher.class);

	RedisTemplate<?, ?> redisTemplate;
	ChannelTopic topic;

	public TripPublisher(RedisTemplate<?, ?> redisTemplate, ChannelTopic topic) {
		this.redisTemplate = redisTemplate;
		this.redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer(Trip.class));
		this.topic = topic;
	}

	public void publish(Trip trip) throws JsonProcessingException {
		LOGGER.info("Sending: {}", trip);
		redisTemplate.convertAndSend(topic.getTopic(), trip);
	}

}
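
For completeness, here’s a minimal sketch of how trip-management might call the publisher after creating a new trip. The controller, the repository and the TripStatus.NEW value below are my assumptions for illustration, not code taken from the sample.

@RestController
@RequestMapping("/trips")
public class TripController {

	@Autowired
	TripPublisher publisher;
	@Autowired
	TripRepository repository; // assumed Spring Data repository for Trip

	@PostMapping
	public Trip create(@RequestBody Trip trip) throws JsonProcessingException {
		trip.setStatus(TripStatus.NEW);
		trip = repository.save(trip);
		// notify driver-management and passenger-management via Redis Pub/Sub
		publisher.publish(trip);
		return trip;
	}

}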

We have already implemented the logic on the publisher side. Now we can proceed to the implementation on the subscriber side. We have two microservices, driver-management and passenger-management, that listen for the notifications sent by the trip-management microservice. We need to define a RedisMessageListenerContainer bean and set the message listener implementation class.

@Configuration
public class DriverConfiguration {

	@Autowired
	RedisConnectionFactory redisConnectionFactory;
	@Autowired
	DriverSubscriber driverSubscriber;

	@Bean
	RedisMessageListenerContainer container() {
		RedisMessageListenerContainer container = new RedisMessageListenerContainer();
		container.addMessageListener(messageListener(), topic());
		container.setConnectionFactory(redisConnectionFactory);
		return container;
	}

	@Bean
	MessageListenerAdapter messageListener() {
		// wrap the Spring-managed DriverSubscriber bean, so that its
		// DriverRepository dependency gets injected properly
		return new MessageListenerAdapter(driverSubscriber);
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

The class responsible for handling incoming notifications needs to implement the MessageListener interface. After receiving a message, DriverSubscriber deserializes it from JSON to an object and changes the driver’s status.

@Service
public class DriverSubscriber implements MessageListener {

	private final Logger LOGGER = LoggerFactory.getLogger(DriverSubscriber.class);

	@Autowired
	DriverRepository repository;
	ObjectMapper mapper = new ObjectMapper();

	@Override
	public void onMessage(Message message, byte[] bytes) {
		try {
			Trip trip = mapper.readValue(message.getBody(), Trip.class);
			LOGGER.info("Message received: {}", trip.toString());
			Optional<Driver> optDriver = repository.findById(trip.getDriverId());
			if (optDriver.isPresent()) {
				Driver driver = optDriver.get();
				if (trip.getStatus() == TripStatus.DONE)
					driver.setStatus(DriverStatus.WAITING);
				else
					driver.setStatus(DriverStatus.BUSY);
				repository.save(driver);
			}
		} catch (IOException e) {
			LOGGER.error("Error reading message", e);
		}
	}

}

Redis as Primary Database

Although Redis is mainly used as an in-memory cache or a key/value store, it may also act as a primary database for your application. In that case it is worth running Redis in persistent mode.

$ docker run -d --name redis -p 6379:6379 redis redis-server --appendonly yes

Entities are stored inside Redis using hash operations and a map structure. Each entity needs to have a hash key and an id.

@RedisHash("driver")
public class Driver {

	@Id
	private Long id;
	private String name;
	@GeoIndexed
	private Point location;
	private DriverStatus status;

	// setters and getters ...
}

Fortunately, Spring Data Redis provides the well-known repository pattern for Redis integration. To enable it, we should annotate a configuration class or the main class with @EnableRedisRepositories.

@Configuration
@EnableRedisRepositories
public class DriverConfiguration {
	// logic ...
}

With Spring Data repositories we don’t have to build any Redis queries ourselves, but just name methods following the Spring Data convention. For more details, you may refer to my previous article Introduction to Spring Data Redis. For our sample purposes the default methods implemented by Spring Data are sufficient. Here’s the declaration of the repository interface in driver-management.

public interface DriverRepository extends CrudRepository<Driver, Long> {}
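
Since the location field of Driver is annotated with @GeoIndexed, the same interface could also expose a derived geo query. The method below is just my illustration following the Spring Data naming convention; it is not part of the sample:

public interface DriverRepository extends CrudRepository<Driver, Long> {

	// Point and Distance come from org.springframework.data.geo;
	// the query is executed with Redis GEO commands thanks to @GeoIndexed
	List<Driver> findByLocationNear(Point point, Distance distance);

}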

Conclusion

As I have mentioned in the preface, there are various use cases for Redis in microservices architecture. I have just presented how you can easily use it together with Spring Cloud and Spring Data to provide a configuration server, a message broker and a database. Redis is commonly considered just a cache, but I hope that after reading this article you will change your mind about it. The sample applications source code is as usual available on GitHub: https://github.com/piomin/sample-redis-microservices.git.

A Magic Around Spring Boot Externalized Configuration

There are some things I really like in Spring Boot, and one of them is externalized configuration. Spring Boot allows you to configure your application in many ways: there are 17 levels of loading configuration properties into the application. All of them are described in the 24th chapter of the Spring Boot documentation.

This article was inspired by some recent talks with developers about problems with the configuration of their applications. They hadn’t heard about some interesting features that may be used to make it more flexible and clear.

By default, Spring Boot tries to load application.properties (or application.yml) from the following locations: classpath:/, classpath:/config/, file:./, file:./config/. Of course, we may override it. You can change the name of the main configuration file by setting the spring.config.name environment property, or change the whole search path by setting the spring.config.location property. It can contain directory names as well as file paths.

Let’s consider the following situation. We want to define different levels of configuration, where for example global properties applying to all our applications are overridden by specific settings defined only for a single application. We have three configuration sources.

# global.yml
property1: global
property2: global
property3: global

# override.yml
property2: override
property3: override

# app.yml
property3: app

The result is visible in the test below. It is important to set the order of the property sources properly, with the most significant source placed at the end:
classpath:/global.yml,classpath:/override.yml,classpath:/app.yml

spring-config-1
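
The screenshot above comes from a test in the sample repository. Roughly, such a test could look like the sketch below; the class name and the use of an inlined test property are my assumptions.

@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.config.location=classpath:/global.yml,classpath:/override.yml,classpath:/app.yml")
public class ConfigLocationTest {

    @Value("${property1}") String property1;
    @Value("${property2}") String property2;
    @Value("${property3}") String property3;

    @Test
    public void testLoadingOrder() {
        Assert.assertEquals("global", property1); // defined only in global.yml
        Assert.assertEquals("override", property2); // override.yml wins over global.yml
        Assert.assertEquals("app", property3); // app.yml is the most significant source
    }

}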

The configuration visible above replaces all the default locations used by Spring Boot. It doesn’t even try to locate application.properties (or application.yml), but loads only the files listed in the spring.config.location environment variable. If we would like to add some custom config locations to the default ones, we may use the spring.config.additional-location variable. However, this only makes sense if we want to override settings defined inside application.yml. Let’s consider the following configuration files available on the classpath.

# application.yml
property1: app
property2: app

# sample-appconfig.yml
property2: sample
property3: sample

In that test case we are using the spring.config.additional-location environment variable to include the sample-appconfig.yml file in addition to the default config locations. It overrides property2 and adds a new property, property3.

spring-config-2

It is possible to create profile-specific application properties files. They have to follow the naming convention application-{profile}.properties (or application-{profile}.yml). If a standard application.properties or application-default.properties file is available under the default config locations, Spring Boot still loads it, but with lower priority than the profile-specific file.

Let’s consider the following configuration files available on the classpath.

# application.yml
property1: app
property2: app

# application-override.yml
property2: override
property3: override

The following test activates the Spring Boot profile override and checks the order of loading default and profile-specific application properties.

spring-config-3
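
Again, a rough sketch of what the test on the screenshot might look like; activating the profile boils down to a single @ActiveProfiles annotation (the class name is mine):

@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("override")
public class ProfileConfigTest {

    @Value("${property2}") String property2;
    @Value("${property3}") String property3;

    @Test
    public void testProfileSpecificProperties() {
        // application-override.yml takes precedence over application.yml
        Assert.assertEquals("override", property2);
        Assert.assertEquals("override", property3);
    }

}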

Additional property sources may also be included by the application through the @PropertySource annotation on a @Configuration class. By default, the application fails to start if such a file is not found. Fortunately, we can change this behaviour by setting the ignoreResourceNotFound attribute to true.

@SpringBootApplication
@PropertySource(value = "classpath:/additional.yml", ignoreResourceNotFound = true)
public class ConfigApp {

    public static void main(String[] args) {
        SpringApplication.run(ConfigApp.class, args);
    }

}

The properties loaded through the @PropertySource annotation have really low priority (16 of the 17 available levels). They can be overridden by the default application properties. We can also put @TestPropertySource on a JUnit test to load an additional property source only for that particular test. Such a property file will override both the properties defined inside the default application properties file and the file included with @PropertySource.

Let’s consider the following configuration files available on the classpath.

# application.yml
property1: app
property2: app

# additional.yml (included with @PropertySource)
property1: additional
property2: additional
property3: additional
property4: additional

# property source loaded with @TestPropertySource
property2: additional-test
property3: additional-test

The following test illustrates the loading order when both @PropertySource and @TestPropertySource are used inside the source code.

spring-config-4
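
A sketch of that test might look as follows. The additional-test.properties file name is my assumption; note that @TestPropertySource expects a .properties file, not YAML.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = ConfigApp.class)
@TestPropertySource("classpath:/additional-test.properties")
public class AdditionalPropertySourceTest {

    @Value("${property2}") String property2;
    @Value("${property4}") String property4;

    @Test
    public void testLoadingOrder() {
        // @TestPropertySource overrides both application.yml and @PropertySource
        Assert.assertEquals("additional-test", property2);
        // property4 comes only from additional.yml included with @PropertySource
        Assert.assertEquals("additional", property4);
    }

}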

All the properties visible above have been injected into the application using the @Value annotation. Spring Boot provides another way to inject configuration properties into classes – via @ConfigurationProperties. Generally, @ConfigurationProperties allows you to inject more complex structures into the application. Let’s imagine we need to inject a list of objects, where each object contains some fields. Here’s our sample object class definition.

public class Person {

    private String firstName;
    private String lastName;
    private int age;

    // getters and setters

}

The class containing the list of Person objects should be annotated with @ConfigurationProperties. The value inside the annotation, persons-list, has to match the prefix of the properties defined inside the application.yml file.

@Component
@ConfigurationProperties("persons-list")
public class PersonsList {

    private List<Person> persons = new ArrayList<>();

    public List<Person> getPersons() {
        return persons;
    }

    public void setPersons(List<Person> persons) {
        this.persons = persons;
    }

}

Here’s the list of persons defined inside application.yml.

persons-list.persons:
  - firstName: John
    lastName: Smith
    age: 30
  - firstName: Tom
    lastName: Walker
    age: 40
  - firstName: Kate
    lastName: Hamilton
    age: 50

The following test injects the PersonsList bean containing the list of persons and checks if they match the list defined inside application.yml.

spring-config-5
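
And here’s, roughly, what that test might look like (a sketch; the class name is mine):

@RunWith(SpringRunner.class)
@SpringBootTest
public class PersonsListTest {

    @Autowired
    PersonsList personsList;

    @Test
    public void testPersonsListBinding() {
        // the three entries from application.yml are bound to Person objects
        Assert.assertEquals(3, personsList.getPersons().size());
        Assert.assertEquals("John", personsList.getPersons().get(0).getFirstName());
        Assert.assertEquals(50, personsList.getPersons().get(2).getAge());
    }

}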

Do you want to try it yourself? The source code with examples is available on GitHub in the springboot-configuration-playground repository.

Introduction to Spring Data Redis

Redis is an in-memory data structure store with optional durability, used as a database, cache and message broker. Currently, it is the most popular tool in the key/value stores category: https://db-engines.com/en/ranking/key-value+store. The easiest way to integrate your application with Redis is through Spring Data Redis. You can use Spring’s RedisTemplate directly for that, or you might as well use Spring Data Redis repositories. There are some limitations when you integrate with Redis via Spring Data Redis repositories: they require at least Redis Server version 2.8.0 and do not work with transactions. Therefore you need to disable transaction support for the RedisTemplate that is leveraged by the Redis repositories.
Redis is usually used for caching data stored in a relational database. In the current sample it will be treated as a primary database – just for simplification. Spring Data repositories do not require any deeper knowledge about Redis from a developer; you just need to annotate your domain class properly. As usual, we will examine the main features of Spring Data Redis based on a sample application. Supposing we have a system that consists of three domain objects: Customer, Account and Transaction, here’s the picture that illustrates the relationships between the elements of that system. A transaction is always related to two accounts: the sender (fromAccountId) and the receiver (toAccountId). Each customer may have many accounts.

redis-1 (1).png

Although the picture above shows three independent domain models, customers and accounts are stored in the same single structure: all of a customer’s accounts are stored as a list inside the customer object. Before proceeding to the implementation details of the sample application, let’s begin by starting the Redis database.

1. Running Redis on Docker

We will run a Redis standalone instance locally using its Docker container. You can start it in in-memory mode or with a persistent store. Here’s the command that runs a single, in-memory instance of Redis in a Docker container, exposed outside on the default port 6379.

$ docker run -d --name redis -p 6379:6379 redis

2. Enabling Redis Repositories and Configuring Connection

I’m using Docker Toolbox, so each container is available for me under the address 192.168.99.100. Here’s the only property that I need to override inside the configuration settings (application.yml).

spring:
  application:
    name: sample-spring-redis
  redis:
    host: 192.168.99.100

To enable Redis repositories for a Spring Boot application we just need to include the single starter spring-boot-starter-data-redis.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>

We may choose between two supported connectors: Lettuce and Jedis. For Jedis I would have to include one additional client library in the dependencies, so I decided to use the simpler option – Lettuce, which does not require any additional libraries to work properly. To enable Spring Data Redis repositories we also need to annotate the main class or a configuration class with @EnableRedisRepositories and declare a RedisTemplate bean. Although we do not use RedisTemplate directly, we still need to declare it, since it is used by the CRUD repositories for integration with Redis. Note that declaring your own connection factory bean overrides the one auto-configured by Spring Boot, so the spring.redis.* properties are not applied to it.

@Configuration
@EnableRedisRepositories
public class SampleSpringRedisConfiguration {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // the no-arg constructor would connect to localhost:6379,
        // so the Docker Toolbox address is passed explicitly here
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("192.168.99.100", 6379));
    }

    @Bean
    public RedisTemplate<?, ?> redisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory());
        return template;
    }

}

3. Implementing domain entities

Each domain entity has to be annotated at least with @RedisHash and has to contain a property annotated with @Id. Those two items are responsible for creating the actual key used to persist the hash. Besides the identifier property annotated with @Id you may also use secondary indices. The good news is that they work not only on single dependent objects, but also on lists and maps. Here’s the definition of the Customer entity. It is available in Redis under the customer key and contains a list of Account entities.

@RedisHash("customer")
public class Customer {

    @Id private Long id;
    @Indexed private String externalId;
    private String name;
    private List<Account> accounts = new ArrayList<>();

    public Customer(Long id, String externalId, String name) {
        this.id = id;
        this.externalId = externalId;
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getExternalId() {
        return externalId;
    }

    public void setExternalId(String externalId) {
        this.externalId = externalId;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Account> getAccounts() {
        return accounts;
    }

    public void setAccounts(List<Account> accounts) {
        this.accounts = accounts;
    }

    public void addAccount(Account account) {
        this.accounts.add(account);
    }

}

Account does not have its own hash. It is contained inside the Customer hash as a list of objects. The id property is indexed in Redis in order to speed up searches based on that property.

public class Account {

    @Indexed private Long id;
    private String number;
    private int balance;

    public Account(Long id, String number, int balance) {
        this.id = id;
        this.number = number;
        this.balance = balance;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }

    public int getBalance() {
        return balance;
    }

    public void setBalance(int balance) {
        this.balance = balance;
    }

}

Finally, let’s take a look at the Transaction entity implementation. It uses only account ids, not the whole objects.

@RedisHash("transaction")
public class Transaction {

    @Id
    private Long id;
    private int amount;
    private Date date;
    @Indexed
    private Long fromAccountId;
    @Indexed
    private Long toAccountId;

    public Transaction(Long id, int amount, Date date, Long fromAccountId, Long toAccountId) {
        this.id = id;
        this.amount = amount;
        this.date = date;
        this.fromAccountId = fromAccountId;
        this.toAccountId = toAccountId;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public int getAmount() {
        return amount;
    }

    public void setAmount(int amount) {
        this.amount = amount;
    }

    public Date getDate() {
        return date;
    }

    public void setDate(Date date) {
        this.date = date;
    }

    public Long getFromAccountId() {
        return fromAccountId;
    }

    public void setFromAccountId(Long fromAccountId) {
        this.fromAccountId = fromAccountId;
    }

    public Long getToAccountId() {
        return toAccountId;
    }

    public void setToAccountId(Long toAccountId) {
        this.toAccountId = toAccountId;
    }

}

4. Implementing repositories

The implementation of repositories is the most pleasant part of our exercise. As usual with Spring Data projects, the most common methods like save, delete or findById are already implemented, so we only have to create our custom find methods if needed. While the usage and implementation of the findByExternalId method is rather obvious, the findByAccountsId method may not be. Let’s move back to the model definition to clarify the usage of that method. Transaction contains only account ids; it does not have a direct relationship with Customer. What if we need to know the details about the customers being the sides of a given transaction? We can find a customer by one of the accounts on its list.

public interface CustomerRepository extends CrudRepository<Customer, Long> {

    Customer findByExternalId(String externalId);
    List<Customer> findByAccountsId(Long id);

}

Here’s the repository interface for the Transaction entity.

public interface TransactionRepository extends CrudRepository<Transaction, Long> {

    List<Transaction> findByFromAccountId(Long fromAccountId);
    List<Transaction> findByToAccountId(Long toAccountId);

}
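
With those interfaces in place there are no queries to write at all. For illustration only, a hypothetical service could use them like this:

@Service
public class TransactionService {

    @Autowired
    TransactionRepository transactionRepository;
    @Autowired
    CustomerRepository customerRepository;

    // all transactions where the given account was the sender
    public List<Transaction> findSentFrom(Long accountId) {
        return transactionRepository.findByFromAccountId(accountId);
    }

    // the customer owning the given account (one side of a transaction)
    public Customer findOwner(Long accountId) {
        List<Customer> customers = customerRepository.findByAccountsId(accountId);
        return customers.isEmpty() ? null : customers.get(0);
    }

}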

5. Building repository tests

We can easily test the Redis repositories’ functionality using Spring Boot Test with @DataRedisTest. This test assumes you have a running instance of the Redis server on the previously configured address 192.168.99.100.

@RunWith(SpringRunner.class)
@DataRedisTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class RedisCustomerRepositoryTest {

    @Autowired
    CustomerRepository repository;

    @Test
    public void testAdd() {
        Customer customer = new Customer(1L, "80010121098", "John Smith");
        customer.addAccount(new Account(1L, "1234567890", 2000));
        customer.addAccount(new Account(2L, "1234567891", 4000));
        customer.addAccount(new Account(3L, "1234567892", 6000));
        customer = repository.save(customer);
        Assert.assertNotNull(customer);
    }

    @Test
    public void testFindByAccounts() {
        List<Customer> customers = repository.findByAccountsId(3L);
        Assert.assertEquals(1, customers.size());
        Customer customer = customers.get(0);
        Assert.assertNotNull(customer);
        Assert.assertEquals(1, customer.getId().longValue());
    }

    @Test
    public void testFindByExternal() {
        Customer customer = repository.findByExternalId("80010121098");
        Assert.assertNotNull(customer);
        Assert.assertEquals(1, customer.getId().longValue());
    }
}

6. More advanced testing with Testcontainers

You may also write more advanced integration tests using Redis as a Docker container started during the test by the Testcontainers library. I have already published some articles about the Testcontainers framework. If you would like to read more about it, please refer to my previous articles: Microservices Integration Tests with Hoverfly and Testcontainers and Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@RunWith(SpringRunner.class)
public class CustomerIntegrationTests {

    @Autowired
    TestRestTemplate template;

    @ClassRule
    public static GenericContainer redis = new GenericContainer("redis:5.0.3").withExposedPorts(6379);

    @Before
    public void init() {
        // point Spring Data Redis at the container started by Testcontainers:
        // the host comes from the container address, the port from the dynamically mapped one
        System.setProperty("spring.redis.host", redis.getContainerIpAddress());
        System.setProperty("spring.redis.port", String.valueOf(redis.getFirstMappedPort()));
    }

    @Test
    public void testAddAndFind() {
        Customer customer = new Customer(1L, "123456", "John Smith");
        customer.addAccount(new Account(1L, "1234567890", 2000));
        customer.addAccount(new Account(2L, "1234567891", 4000));
        customer = template.postForObject("/customers", customer, Customer.class);
        Assert.assertNotNull(customer);
        customer = template.getForObject("/customers/{id}", Customer.class, 1L);
        Assert.assertNotNull(customer);
        Assert.assertEquals("123456", customer.getExternalId());
        Assert.assertEquals(2, customer.getAccounts().size());
    }

}

7. Viewing data

Now, let’s analyze the data stored in Redis after our JUnit tests. We may use one of the GUI tools for that. I decided to install RDBTools, available at https://rdbtools.com. You can easily browse the data stored in Redis using this tool. Here’s the result for the customer entity with id=1 after running the JUnit tests.

redis-2

Here’s a similar result for the transaction entity with id=1.

redis-3

Source Code

The sample application source code is available on GitHub in the repository sample-spring-redis.