Redis in Microservices Architecture

Redis can be widely used in a microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, a cache, or a message broker. Since it is also a key/value store, we can use it as a configuration server or discovery server in a microservices architecture. Although it is usually defined as an in-memory data store, we can also run it in persistent mode.
Today, I'm going to show you some examples of using Redis with microservices built on top of the Spring Boot and Spring Cloud frameworks. These applications will communicate with each other asynchronously using Redis Pub/Sub, use Redis as a cache or primary database, and finally use Redis as a configuration server. Here's the picture that illustrates the described architecture.

redis-micro-2.png

Redis as Configuration Server

If you have already built microservices with Spring Cloud, you are probably familiar with Spring Cloud Config. It is responsible for providing the distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a backend repository for property sources. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in the official Spring Cloud release, but for now you may use my forked repo to run it. It is available on my GitHub account piomin/spring-cloud-config. How to use it? It's very simple. Let's see.
The current SNAPSHOT version of Spring Boot is 2.2.0.BUILD-SNAPSHOT, the same as for Spring Cloud Config. When building the Spring Cloud Config Server we only need to set the Spring Boot parent and include the spring-cloud-config-server dependency, as shown below.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</parent>
<artifactId>config-service</artifactId>
<groupId>pl.piomin.services</groupId>
<version>1.0-SNAPSHOT</version>

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-config-server</artifactId>
		<version>2.2.0.BUILD-SNAPSHOT</version>
	</dependency>
</dependencies>

By default, Spring Cloud Config Server uses a Git repository backend. We need to activate the redis profile to force it to use Redis as a backend. If your Redis instance listens on an address other than localhost:6379, you need to override the auto-configured connection settings with spring.redis.* properties. Here's our bootstrap.yml file.

spring:
  application:
    name: config-service
  profiles:
    active: redis
  redis:
    host: 192.168.99.100

The application main class should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

	public static void main(String[] args) {
		new SpringApplicationBuilder(ConfigApplication.class).run(args);
	}

}

Before running the application we need to start a Redis instance. Here's the command that runs it as a Docker container and exposes it on port 6379.

$ docker run -d --name redis -p 6379:6379 redis

The configuration for every application has to be available under the key ${spring.application.name} or ${spring.application.name}-${spring.profiles.active[n]}.
We have to create a hash with fields corresponding to the names of configuration properties. Our sample application driver-management uses three configuration properties: server.port for setting the HTTP listening port, spring.redis.host for changing the default address of Redis used as a message broker and database, and sample.topic.name for setting the name of the topic used for asynchronous communication between our microservices. Here's the structure of the Redis hash created for driver-management, visualized with RDBTools.

redis-micro-3

That visualization is an equivalent of running the Redis CLI command HGETALL, which returns all the fields and values in a hash.

>> HGETALL driver-management
{
  "server.port": "8100",
  "sample.topic.name": "trips",
  "spring.redis.host": "192.168.99.100"
}
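
That hash can be created manually with the Redis CLI HSET command, for example:

>> HSET driver-management server.port 8100
>> HSET driver-management sample.topic.name trips
>> HSET driver-management spring.redis.host 192.168.99.100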

After setting keys and values in Redis and running Spring Cloud Config Server with the active redis profile, we need to enable the distributed configuration feature on the client side. To do that we just need to include the spring-cloud-starter-config dependency in the pom.xml of every microservice.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-config</artifactId>
</dependency>

We use the newest stable version of Spring Cloud.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Greenwich.SR1</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The application name is taken from the spring.application.name property on startup, so we need to provide the following bootstrap.yml file.

spring:
  application:
    name: driver-management
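
You can quickly verify that the server returns the properties for driver-management by calling the standard Spring Cloud Config REST endpoint /{application}/{profile} (a sample invocation, assuming the config server runs on the Spring Boot default port 8080):

$ curl http://localhost:8080/driver-management/default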

Redis as Message Broker

Now we can proceed to the second use case of Redis in a microservices-based architecture – message broker. We will implement a typical asynchronous system, shown in the picture below. The trip-management microservice sends a notification to Redis Pub/Sub after creating a new trip and after finishing the current trip. The notification is received by both driver-management and passenger-management, which are subscribed to the particular channel.

micro-redis-1.png

Our application is very simple. We just need to add the following dependencies in order to provide a REST API and integrate with Redis Pub/Sub.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

We should register beans for the channel topic and the publisher. TripPublisher is responsible for sending messages to the target topic.

@Configuration
public class TripConfiguration {

	@Autowired
	RedisTemplate<?, ?> redisTemplate;

	@Bean
	TripPublisher redisPublisher() {
		return new TripPublisher(redisTemplate, topic());
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

TripPublisher uses RedisTemplate for sending messages to the topic. Before sending, it converts every message from an object to a JSON string using Jackson2JsonRedisSerializer.

public class TripPublisher {

	private static final Logger LOGGER = LoggerFactory.getLogger(TripPublisher.class);

	RedisTemplate<?, ?> redisTemplate;
	ChannelTopic topic;

	public TripPublisher(RedisTemplate<?, ?> redisTemplate, ChannelTopic topic) {
		this.redisTemplate = redisTemplate;
		this.redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer(Trip.class));
		this.topic = topic;
	}

	public void publish(Trip trip) throws JsonProcessingException {
		LOGGER.info("Sending: {}", trip);
		redisTemplate.convertAndSend(topic.getTopic(), trip);
	}

}
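
Here's a minimal sketch of how trip-management could trigger the publisher from a REST controller (illustrative code – the class and endpoint names are assumptions, and the persistence logic is omitted):

@RestController
@RequestMapping("/trips")
public class TripController {

	@Autowired
	TripPublisher publisher;

	@PostMapping
	public Trip create(@RequestBody Trip trip) throws JsonProcessingException {
		// ... persist the trip here ...
		publisher.publish(trip); // notify subscribers on the "trips" channel
		return trip;
	}

}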

We have already implemented the logic on the publisher side. Now, we can proceed to the implementation on the subscriber side. We have two microservices, driver-management and passenger-management, that listen for the notifications sent by the trip-management microservice. We need to define a RedisMessageListenerContainer bean and set the message listener implementation class.

@Configuration
public class DriverConfiguration {

	@Autowired
	RedisConnectionFactory redisConnectionFactory;

	@Bean
	RedisMessageListenerContainer container() {
		RedisMessageListenerContainer container = new RedisMessageListenerContainer();
		container.addMessageListener(messageListener(), topic());
		container.setConnectionFactory(redisConnectionFactory);
		return container;
	}

	@Bean
	MessageListenerAdapter messageListener() {
		return new MessageListenerAdapter(new DriverSubscriber());
	}

	@Bean
	ChannelTopic topic() {
		return new ChannelTopic("trips");
	}

}

The class responsible for handling incoming notifications needs to implement the MessageListener interface. After receiving a message, DriverSubscriber deserializes it from JSON to an object and changes the driver's status.

@Service
public class DriverSubscriber implements MessageListener {

	private final Logger LOGGER = LoggerFactory.getLogger(DriverSubscriber.class);

	@Autowired
	DriverRepository repository;
	ObjectMapper mapper = new ObjectMapper();

	@Override
	public void onMessage(Message message, byte[] bytes) {
		try {
			Trip trip = mapper.readValue(message.getBody(), Trip.class);
			LOGGER.info("Message received: {}", trip.toString());
			Optional<Driver> optDriver = repository.findById(trip.getDriverId());
			if (optDriver.isPresent()) {
				Driver driver = optDriver.get();
				if (trip.getStatus() == TripStatus.DONE)
					driver.setStatus(DriverStatus.WAITING);
				else
					driver.setStatus(DriverStatus.BUSY);
				repository.save(driver);
			}
		} catch (IOException e) {
			LOGGER.error("Error reading message", e);
		}
	}

}

Redis as Primary Database

Although the main purpose of using Redis is in-memory caching or a key/value store, it may also act as a primary database for your application. In that case it is worth running Redis in persistent mode.

$ docker run -d --name redis -p 6379:6379 redis redis-server --appendonly yes

Entities are stored inside Redis using hash operations, as a map structure. Each entity needs to have a hash key and an id.

@RedisHash("driver")
public class Driver {

	@Id
	private Long id;
	private String name;
	@GeoIndexed
	private Point location;
	private DriverStatus status;

	// setters and getters ...
}

Fortunately, Spring Data Redis provides the well-known repositories pattern for Redis integration. To enable it, we should annotate the configuration or main class with @EnableRedisRepositories. When using the Spring repositories pattern we don't have to build any queries to Redis by ourselves.

@Configuration
@EnableRedisRepositories
public class DriverConfiguration {
	// logic ...
}

With Spring Data repositories we don't have to build any Redis queries, but just name methods following the Spring Data convention. For more details, you may refer to my previous article Introduction to Spring Data Redis. For our sample purposes we can use the default methods implemented inside Spring Data. Here's the declaration of the repository interface in driver-management.

public interface DriverRepository extends CrudRepository<Driver, Long> {}
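
Here's a minimal sketch of how such a repository may be used, assuming an injected DriverRepository instance named repository (illustrative code):

Driver driver = new Driver();
driver.setId(1L);
driver.setName("John Smith");
driver.setStatus(DriverStatus.WAITING);
// save() persists the entity as a hash under the "driver" keyspace, e.g. driver:1
repository.save(driver);
// findById() reads the hash back and maps it onto the entity
Optional<Driver> result = repository.findById(1L);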

Don’t forget to enable Spring Data repositories by annotating the main application class or configuration class with @EnableRedisRepositories.

@Configuration
@EnableRedisRepositories
public class DriverConfiguration {
	...
}

Conclusion

As I mentioned in the preface, there are various use cases for Redis in a microservices architecture. I have just presented how you can easily use it together with Spring Cloud and Spring Data to provide a configuration server, message broker and database. Redis is commonly considered just a cache, but I hope that after reading this article you will change your mind about it. The sample applications' source code is as usual available on GitHub: https://github.com/piomin/sample-redis-microservices.git.


A Magic Around Spring Boot Externalized Configuration

There are some things I really like in Spring Boot, and one of them is externalized configuration. Spring Boot allows you to configure your application in many ways. You have 17 levels of loading configuration properties into the application. All of them are described in the 24th chapter of the Spring Boot documentation, "Externalized Configuration".

This article was inspired by some recent talks with developers about problems with the configuration of their applications. They hadn't heard about some interesting features that may be used to make it more flexible and clear.

By default, Spring Boot tries to load application.properties (or application.yml) from the following locations: classpath:/, classpath:/config/, file:./, file:./config/. Of course, we may override this. You can change the name of the main configuration file by setting the environment property spring.config.name, or just change the whole search path by setting the property spring.config.location. It can contain directory names, as well as file paths.

Let's consider the following situation. We want to define different levels of configuration, where, for example, global properties applying to all our applications are overridden by specific settings defined only for a single application. We have three configuration sources.

global.yml:

property1: global
property2: global
property3: global

override.yml:

property2: override
property3: override

app.yml:

property3: app

The result is visible in the test below. It is important to properly set the order of property sources, where the most significant source is placed at the end:
classpath:/global.yml,classpath:/override.yml,classpath:/app.yml

spring-config-1
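
The test from the screenshot could look roughly like the sketch below (the class name is illustrative; the three files are passed via spring.config.location with the most significant one at the end):

@RunWith(SpringRunner.class)
@SpringBootTest(properties =
        "spring.config.location=classpath:/global.yml,classpath:/override.yml,classpath:/app.yml")
public class ConfigLocationTest {

    @Value("${property1}")
    String property1;
    @Value("${property2}")
    String property2;
    @Value("${property3}")
    String property3;

    @Test
    public void testOrderOfPropertySources() {
        Assert.assertEquals("global", property1);   // defined only in global.yml
        Assert.assertEquals("override", property2); // overridden by override.yml
        Assert.assertEquals("app", property3);      // overridden by app.yml
    }

}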

The configuration visible above replaces all the default locations used by Spring Boot. It doesn't even try to locate application.properties (or application.yml), but loads only the files listed inside the spring.config.location property. If we would like to add some custom config locations to the default locations, we may use the spring.config.additional-location property. However, this only makes sense if we want to override settings defined inside application.yml. Let's consider the following configuration files available on the classpath.

application.yml:

property1: app
property2: app

sample-appconfig.yml:

property2: sample
property3: sample

In that test case we are using the spring.config.additional-location property to add the sample-appconfig.yml file to the default config locations. It overrides property2 and adds the new property property3.

spring-config-2
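
A sketch of that test (the class name is illustrative) – only the additional location is passed, so the default application.yml is still loaded:

@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.config.additional-location=classpath:/sample-appconfig.yml")
public class AdditionalLocationTest {

    @Value("${property1}")
    String property1;
    @Value("${property2}")
    String property2;
    @Value("${property3}")
    String property3;

    @Test
    public void testAdditionalLocation() {
        Assert.assertEquals("app", property1);    // from application.yml
        Assert.assertEquals("sample", property2); // overridden by sample-appconfig.yml
        Assert.assertEquals("sample", property3); // added by sample-appconfig.yml
    }

}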

It is possible to create profile-specific application properties files. They have to follow the naming convention application-{profile}.properties (or application-{profile}.yml). If standard application.properties or application-default.properties files are available under the default config locations, Spring Boot still loads them, but with a lower priority than the profile-specific file.

Let’s consider the following configuration files available on the classpath.

application.yml:

property1: app
property2: app

application-override.yml:

property2: override
property3: override

The following test activates the Spring Boot profile override and verifies the order of loading default and profile-specific application properties.

spring-config-3
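
A sketch of such a test (the class name is illustrative) can activate the profile with @ActiveProfiles:

@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("override")
public class ProfileSpecificConfigTest {

    @Value("${property1}")
    String property1;
    @Value("${property2}")
    String property2;
    @Value("${property3}")
    String property3;

    @Test
    public void testProfileSpecificProperties() {
        Assert.assertEquals("app", property1);      // defined only in application.yml
        Assert.assertEquals("override", property2); // application-override.yml wins
        Assert.assertEquals("override", property3); // added by application-override.yml
    }

}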

Additional property sources may also be included by the application through the @PropertySource annotation on a @Configuration class. By default, the application fails to start if such a file is not found. Fortunately, we can change this behaviour by setting the attribute ignoreResourceNotFound to true.

@SpringBootApplication
@PropertySource(value = "classpath:/additional.yml", ignoreResourceNotFound = true)
public class ConfigApp {

    public static void main(String[] args) {
        SpringApplication.run(ConfigApp.class, args);
    }

}

The properties loaded through the @PropertySource annotation have a really low priority (16 of the 17 available levels). They can be overridden by the default application properties. We can also define @TestPropertySource on our JUnit test to load an additional property source only for a particular test. Such a property file will override both properties defined inside the default application properties file and the file included with @PropertySource.

Let’s consider the following configuration files available on the classpath.

application.yml:

property1: app
property2: app

additional.yml (loaded with @PropertySource):

property1: additional
property2: additional
property3: additional
property4: additional

properties loaded with @TestPropertySource:

property2: additional-test
property3: additional-test

The following test illustrates loading order when both @PropertySource and @TestPropertySource are used inside the source code.

spring-config-4
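
A sketch of that test (class and file names are illustrative, assuming the test source is a classic .properties file):

@RunWith(SpringRunner.class)
@SpringBootTest
@TestPropertySource(locations = "classpath:/additional-test.properties")
public class AdditionalPropertySourcesTest {

    @Value("${property1}")
    String property1;
    @Value("${property2}")
    String property2;
    @Value("${property3}")
    String property3;
    @Value("${property4}")
    String property4;

    @Test
    public void testPropertySourcesPriority() {
        Assert.assertEquals("app", property1);             // application.yml beats @PropertySource
        Assert.assertEquals("additional-test", property2); // @TestPropertySource beats everything
        Assert.assertEquals("additional-test", property3);
        Assert.assertEquals("additional", property4);      // defined only in additional.yml
    }

}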

All the properties visible above have been injected into the application using the @Value annotation. Spring Boot provides another way to inject configuration properties into classes – via @ConfigurationProperties. Generally, @ConfigurationProperties allows you to inject more complex structures into the application. Let's imagine we need to inject a list of objects, where each object contains some fields. Here's our sample object class definition.

public class Person {

    private String firstName;
    private String lastName;
    private int age;

    // getters and setters

}

The class containing the list of Person objects should be annotated with @ConfigurationProperties. The value persons-list inside the annotation has to match the prefix of the property defined inside the application.yml file.

@Component
@ConfigurationProperties("persons-list")
public class PersonsList {

    private List<Person> persons = new ArrayList<>();

    public List<Person> getPersons() {
        return persons;
    }

    public void setPersons(List<Person> persons) {
        this.persons = persons;
    }

}

Here's the list of persons defined inside application.yml.

persons-list.persons:
  - firstName: John
    lastName: Smith
    age: 30
  - firstName: Tom
    lastName: Walker
    age: 40
  - firstName: Kate
    lastName: Hamilton
    age: 50

The following test injects the PersonsList bean containing the list of persons and checks if they match the list defined inside application.yml.

spring-config-5
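
Such a test could look like the following sketch (the class name is illustrative):

@RunWith(SpringRunner.class)
@SpringBootTest
public class PersonsListTest {

    @Autowired
    PersonsList personsList;

    @Test
    public void testPersonsLoadedFromYaml() {
        Assert.assertEquals(3, personsList.getPersons().size());
        Assert.assertEquals("John", personsList.getPersons().get(0).getFirstName());
        Assert.assertEquals(50, personsList.getPersons().get(2).getAge());
    }

}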

Do you want to try it yourself? The source code with examples is available on GitHub in the repository springboot-configuration-playground.

Introduction to Spring Data Redis

Redis is an in-memory data structure store with optional durability, used as a database, cache and message broker. Currently, it is the most popular tool in the key/value stores category: https://db-engines.com/en/ranking/key-value+store. The easiest way to integrate your application with Redis is through Spring Data Redis. You can use Spring RedisTemplate directly for that, or you might as well use Spring Data Redis repositories. There are some limitations when you integrate with Redis via Spring Data Redis repositories. They require at least Redis Server version 2.8.0 and do not work with transactions. Therefore you need to disable transaction support for RedisTemplate, which is leveraged by Redis repositories.
Redis is usually used for caching data stored in a relational database. In the current sample it will be treated as a primary database – just for simplification. Spring Data repositories do not require any deeper knowledge about Redis from a developer. You just need to annotate your domain class properly. As usual, we will examine the main features of Spring Data Redis based on a sample application. Suppose we have a system that consists of three domain objects: Customer, Account and Transaction. Here's the picture that illustrates the relationships between the elements of that system. A Transaction is always related to two accounts: a sender (fromAccountId) and a receiver (toAccountId). Each customer may have many accounts.

redis-1 (1).png

Although the picture visible above shows three independent domain models, customer and account are stored in the same single structure. All of a customer's accounts are stored as a list inside the customer object. Before proceeding to the sample application implementation details, let's begin by starting the Redis database.

1. Running Redis on Docker

We will run a Redis standalone instance locally using its Docker container. You can start it in in-memory mode or with a persistence store. Here's the command that runs a single, in-memory instance of Redis in a Docker container. It is exposed outside on the default port 6379.

$ docker run -d --name redis -p 6379:6379 redis

2. Enabling Redis Repositories and Configuring Connection

I'm using Docker Toolbox, so each container is available for me under the address 192.168.99.100. Here's the only property I need to override inside the configuration settings (application.yml).

spring:
  application:
    name: sample-spring-redis
  redis:
    host: 192.168.99.100

To enable Redis repositories for a Spring Boot application we just need to include the single starter spring-boot-starter-data-redis.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>

We may choose between two supported connectors: Lettuce and Jedis. For Jedis I had to include one additional client library in the dependencies, so I decided to use the simpler option – Lettuce, which does not require any additional libraries to work properly. To enable Spring Data Redis repositories we also need to annotate the main or configuration class with @EnableRedisRepositories and declare a RedisTemplate bean. Although we do not use RedisTemplate directly, we still need to declare it, since it is used by the CRUD repositories for integration with Redis.

@Configuration
@EnableRedisRepositories
public class SampleSpringRedisConfiguration {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory();
    }

    @Bean
    public RedisTemplate<?, ?> redisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory());
        return template;
    }

}

3. Implementing domain entities

Each domain entity has to be annotated at least with @RedisHash and contain a property annotated with @Id. Those two items are responsible for creating the actual key used to persist the hash. Besides identifier properties annotated with @Id, you may also use secondary indices. The good news is that they work not only on single dependent objects, but also on lists and maps. Here's the definition of the Customer entity. It is available in Redis under the customer key. It contains a list of Account entities.

@RedisHash("customer")
public class Customer {

    @Id private Long id;
    @Indexed private String externalId;
    private String name;
    private List<Account> accounts = new ArrayList<>();

    public Customer(Long id, String externalId, String name) {
        this.id = id;
        this.externalId = externalId;
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getExternalId() {
        return externalId;
    }

    public void setExternalId(String externalId) {
        this.externalId = externalId;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Account> getAccounts() {
        return accounts;
    }

    public void setAccounts(List<Account> accounts) {
        this.accounts = accounts;
    }

    public void addAccount(Account account) {
        this.accounts.add(account);
    }

}

Account does not have its own hash. It is contained inside the Customer hash as a list of objects. The property id is indexed in Redis in order to speed up searches based on that property.

public class Account {

    @Indexed private Long id;
    private String number;
    private int balance;

    public Account(Long id, String number, int balance) {
        this.id = id;
        this.number = number;
        this.balance = balance;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }

    public int getBalance() {
        return balance;
    }

    public void setBalance(int balance) {
        this.balance = balance;
    }

}

Finally, let's take a look at the Transaction entity implementation. It uses only account ids, not the whole objects.

@RedisHash("transaction")
public class Transaction {

    @Id
    private Long id;
    private int amount;
    private Date date;
    @Indexed
    private Long fromAccountId;
    @Indexed
    private Long toAccountId;

    public Transaction(Long id, int amount, Date date, Long fromAccountId, Long toAccountId) {
        this.id = id;
        this.amount = amount;
        this.date = date;
        this.fromAccountId = fromAccountId;
        this.toAccountId = toAccountId;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public int getAmount() {
        return amount;
    }

    public void setAmount(int amount) {
        this.amount = amount;
    }

    public Date getDate() {
        return date;
    }

    public void setDate(Date date) {
        this.date = date;
    }

    public Long getFromAccountId() {
        return fromAccountId;
    }

    public void setFromAccountId(Long fromAccountId) {
        this.fromAccountId = fromAccountId;
    }

    public Long getToAccountId() {
        return toAccountId;
    }

    public void setToAccountId(Long toAccountId) {
        this.toAccountId = toAccountId;
    }

}

4. Implementing repositories

The implementation of repositories is the most pleasant part of our exercise. As usual with Spring Data projects, the most common methods like save, delete or findById are already implemented. So we only have to create our custom find methods if needed. While the usage and implementation of the findByExternalId method is rather obvious, the findByAccountsId method may not be. Let's move back to the model definition to clarify the usage of that method. Transaction contains only account ids; it does not have a direct relationship with Customer. What if we need to know the details about the customers being the sides of a given transaction? We can find a customer by one of the accounts from its list.

public interface CustomerRepository extends CrudRepository<Customer, Long> {

    Customer findByExternalId(String externalId);
    List<Customer> findByAccountsId(Long id);

}

Here's the implementation of the repository for the Transaction entity.

public interface TransactionRepository extends CrudRepository<Transaction, Long> {

    List<Transaction> findByFromAccountId(Long fromAccountId);
    List<Transaction> findByToAccountId(Long toAccountId);

}

5. Building repository tests

We can easily test the Redis repositories' functionality using the Spring Boot Test project with @DataRedisTest. This test assumes you have a running instance of the Redis server on the already configured address 192.168.99.100.

@RunWith(SpringRunner.class)
@DataRedisTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class RedisCustomerRepositoryTest {

    @Autowired
    CustomerRepository repository;

    @Test
    public void testAdd() {
        Customer customer = new Customer(1L, "80010121098", "John Smith");
        customer.addAccount(new Account(1L, "1234567890", 2000));
        customer.addAccount(new Account(2L, "1234567891", 4000));
        customer.addAccount(new Account(3L, "1234567892", 6000));
        customer = repository.save(customer);
        Assert.assertNotNull(customer);
    }

    @Test
    public void testFindByAccounts() {
        List<Customer> customers = repository.findByAccountsId(3L);
        Assert.assertEquals(1, customers.size());
        Customer customer = customers.get(0);
        Assert.assertNotNull(customer);
        Assert.assertEquals(1, customer.getId().longValue());
    }

    @Test
    public void testFindByExternal() {
        Customer customer = repository.findByExternalId("80010121098");
        Assert.assertNotNull(customer);
        Assert.assertEquals(1, customer.getId().longValue());
    }
}

6. More advanced testing with Testcontainers

You may provide some more advanced integration tests using Redis as a Docker container started during the test by the Testcontainers library. I have already published some articles about the Testcontainers framework. If you would like to read more details about it, please refer to my previous articles: Microservices Integration Tests with Hoverfly and Testcontainers and Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@RunWith(SpringRunner.class)
public class CustomerIntegrationTests {

    @Autowired
    TestRestTemplate template;

    @ClassRule
    public static GenericContainer redis = new GenericContainer("redis:5.0.3").withExposedPorts(6379);

    @BeforeClass
    public static void init() {
        // The container is already started by the @ClassRule, so we can pass its
        // mapped host and port to Spring before the application context is created.
        System.setProperty("spring.redis.host", redis.getContainerIpAddress());
        System.setProperty("spring.redis.port", String.valueOf(redis.getFirstMappedPort()));
    }

    @Test
    public void testAddAndFind() {
        Customer customer = new Customer(1L, "123456", "John Smith");
        customer.addAccount(new Account(1L, "1234567890", 2000));
        customer.addAccount(new Account(2L, "1234567891", 4000));
        customer = template.postForObject("/customers", customer, Customer.class);
        Assert.assertNotNull(customer);
        customer = template.getForObject("/customers/{id}", Customer.class, 1L);
        Assert.assertNotNull(customer);
        Assert.assertEquals("123456", customer.getExternalId());
        Assert.assertEquals(2, customer.getAccounts().size());
    }

}

7. Viewing data

Now, let's analyze the data stored in Redis after our JUnit tests. We may use one of the GUI tools for that. I decided to install RDBTools, available at https://rdbtools.com. You can easily browse data stored in Redis using this tool. Here's the result for the customer entity with id=1 after running the JUnit test.

redis-2
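
The same data can be inspected with the Redis CLI. A simplified sketch of the output is shown below – Spring Data Redis additionally stores a _class field with the fully qualified class name (shortened here) and flattens nested objects into indexed property paths; the remaining accounts are omitted:

>> HGETALL customer:1
{
  "_class": "<your package>.Customer",
  "id": "1",
  "externalId": "80010121098",
  "name": "John Smith",
  "accounts.[0].id": "1",
  "accounts.[0].number": "1234567890",
  "accounts.[0].balance": "2000"
}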

Here's the similar result for the transaction entity with id=1.

redis-3

Source Code

The sample application source code is available on GitHub in the repository sample-spring-redis.

Kotlin Microservices with Micronaut, Spring Cloud and JPA

Micronaut Framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach is integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools, which are part of the Spring Cloud project. That's good news, because in version 1.0 the only supported distributed solution was Consul, and there was no possibility to use Eureka discovery together with the Consul property source (running them together ended with an exception).

In this article you will learn how to:

  • Configure Micronaut Maven support for Kotlin using the Kapt compiler
  • Implement microservices with Micronaut and Kotlin
  • Integrate Micronaut with the Spring Cloud Eureka discovery server
  • Integrate Micronaut with the Spring Cloud Config server
  • Configure JPA/Hibernate support for an application built on top of Micronaut

For simplification, we run a single instance of PostgreSQL shared between all sample microservices.

Our architecture is pretty similar to the architecture described in my previous article about Micronaut, Quick Guide to Microservice with Micronaut Framework. We also have three microservices that communicate with each other. We use Spring Cloud Eureka and Spring Cloud Config for discovery and distributed configuration instead of Consul. Every service has a backend store – a PostgreSQL database. This architecture is visualized in the following picture.

micronaut-2-arch (1).png

After that short introduction we may proceed to the development. Let's begin with configuring Kotlin support for Micronaut.

1. Kotlin with Micronaut – configuration

Support for Kotlin with the Kapt compiler plugin is well described on the Micronaut docs site (https://docs.micronaut.io/1.1.0.M1/guide/index.html#kotlin). However, I decided to use Maven instead of Gradle, so our configuration will be slightly different from the instructions for Gradle. We configure Kapt inside the Maven plugin for Kotlin, kotlin-maven-plugin. Thanks to that, Kapt will create Java "stub" classes for each of your Kotlin classes, which can then be processed by Micronaut's Java annotation processor. The Micronaut annotation processors are declared inside the annotationProcessorPaths tag in the configuration section. Here's the full Maven configuration to provide support for Kotlin. Besides the core library micronaut-inject-java, we also use annotation processors from the tracing, OpenAPI and JPA libraries.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<dependencies>
		<dependency>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-allopen</artifactId>
			<version>${kotlin.version}</version>
		</dependency>
	</dependencies>
	<configuration>
		<jvmTarget>1.8</jvmTarget>
	</configuration>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
		<execution>
			<id>kapt</id>
			<goals>
				<goal>kapt</goal>
			</goals>
			<configuration>
				<sourceDirs>
					<sourceDir>src/main/kotlin</sourceDir>
				</sourceDirs>
				<annotationProcessorPaths>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-inject-java</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-openapi</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-tracing</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>javax.persistence</groupId>
						<artifactId>javax.persistence-api</artifactId>
						<version>2.2</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-hibernate-jpa</artifactId>
						<version>1.1.0.RC2</version>
					</annotationProcessorPath>
				</annotationProcessorPaths>
			</configuration>
		</execution>
	</executions>
</plugin>

We also should not run maven-compiler-plugin during the compilation phase. The Kapt compiler generates Java classes, so we don't need to run any other compiler during the build.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-compiler-plugin</artifactId>
	<configuration>
		<proc>none</proc>
		<source>1.8</source>
		<target>1.8</target>
	</configuration>
	<executions>
		<execution>
			<id>default-compile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>default-testCompile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>java-compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>java-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>testCompile</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Finally, we will add the Kotlin core library and the Jackson module for JSON serialization.

<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

If you are running the application with IntelliJ IDEA, you should first enable annotation processing. To do that, go to Build, Execution, Deployment -> Compiler -> Annotation Processors, as shown below.

micronaut-2-1

2. Running Postgres

Before proceeding to the development we have to start an instance of the PostgreSQL database. It will be started as a Docker container. For me, PostgreSQL is now available under the address 192.168.99.100:5432, because I'm using Docker Toolbox.

$ docker run -d --name postgres -e POSTGRES_USER=micronaut -e POSTGRES_PASSWORD=123456 -e POSTGRES_DB=micronaut -p 5432:5432 postgres

3. Enabling Hibernate for Micronaut

Hibernate configuration is a little harder for Micronaut than for Spring Boot. We don't have any project like Spring Data JPA, where almost everything is auto-configured. Besides the JDBC driver specific to the target database, we have to include the following dependencies. We may choose between three available libraries providing a datasource implementation: Tomcat, Hikari or DBCP.

<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-jdbc-hikari</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-jpa</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

The next step is to provide some configuration settings. All the properties will be stored on the configuration server. We have to set the database connection settings and credentials. The JPA configuration settings are provided under the jpa.* key. We force Hibernate to update the database schema on application startup and to print all SQL logs (only for tests).

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Here’s our sample domain object.

@Entity
data class Department(@Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq") @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq") var id: Long,
                      var organizationId: Long, var name: String) {

    @Transient
    var employees: MutableList<Employee> = mutableListOf()

}

The repository bean needs to inject EntityManager using the @PersistenceContext and @CurrentSession annotations. All functions need to be annotated with @Transactional, which requires the methods not to be final (the open modifier in Kotlin).

@Singleton
open class DepartmentRepository(@param:CurrentSession @field:PersistenceContext val entityManager: EntityManager) {

    @Transactional
    open fun add(department: Department): Department {
        entityManager.persist(department)
        return department
    }

    @Transactional(readOnly = true)
    open fun findById(id: Long): Department = entityManager.find(Department::class.java, id)

    @Transactional(readOnly = true)
    open fun findAll(): List<Department> = entityManager.createQuery("SELECT d FROM Department d").resultList as List<Department>

    @Transactional(readOnly = true)
    open fun findByOrganization(organizationId: Long) = entityManager.createQuery("SELECT d FROM Department d WHERE d.organizationId = :orgId")
            .setParameter("orgId", organizationId)
            .resultList as List<Department>

}

4. Running Spring Cloud Config Server

Running Spring Cloud Config server is very simple. I have already described it in some of my previous articles. All of those were prepared for Java, while today we start it as a Kotlin application. Here's our main class. It should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
class ConfigApplication

fun main(args: Array<String>) {
    runApplication<ConfigApplication>(*args)
}

Besides the Kotlin core dependency we need to include the spring-cloud-config-server artifact.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

By default, the config server tries to use Git as the property source backend. We prefer using classpath resources, which is much simpler for our tests. To do that, we have to enable the native profile. We will also set the server port to 8888.

spring:
  application:
    name: config-service
  profiles:
    active: native
server:
  port: 8888

If you place all configuration files under the directory src/main/resources/config, they will be automatically loaded after startup.

micronaut-2-2

Here's the configuration file for department-service.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
endpoints:
  info:
    enabled: true
    sensitive: false
eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

5. Running Eureka Server

Eureka server will also be run as Spring Boot application written in Kotlin.

@SpringBootApplication
@EnableEurekaServer
class DiscoveryApplication

fun main(args: Array<String>) {
    runApplication<DiscoveryApplication>(*args)
}

We also need to include a single dependency, spring-cloud-starter-netflix-eureka-server, besides kotlin-stdlib-jdk8.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

We run a standalone instance of Eureka on port 8761.

spring:
  application:
    name: discovery-service
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

6. Integrating Micronaut with Spring Cloud

The implementation of the distributed configuration client is automatically included in the Micronaut core. We only need to include the module for service discovery.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

We don't have to place anything in the source code. All the features can be enabled via configuration settings. First, we need to enable the config client by setting the property micronaut.config-client.enabled to true. The next step is to enable a specific implementation of the config client – in our case Spring Cloud Config – and then set the target URL.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
spring:
  cloud:
    config:
      enabled: true
      uri: http://localhost:8888/

Each application fetches its properties from the configuration server. The part of the configuration responsible for enabling discovery based on the Eureka server is visible below.

eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

7. Running applications

Kapt needs to be able to compile Kotlin code to Java successfully. That's why we place the main method inside a companion object within the class declaration and annotate it with @JvmStatic. The main class visible below is also annotated with @OpenAPIDefinition in order to generate a Swagger definition for the API methods.

@OpenAPIDefinition(
        info = Info(
                title = "Departments Management",
                version = "1.0",
                description = "Department API",
                contact = Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
        )
)
open class DepartmentApplication {

    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            Micronaut.run(DepartmentApplication::class.java)
        }
    }
	
}

Here's the controller class from department-service. It injects the repository bean for database integration and EmployeeClient for HTTP communication with employee-service.

@Controller("/departments")
open class DepartmentController(private val logger: Logger = LoggerFactory.getLogger(DepartmentController::class.java)) {

    @Inject
    lateinit var repository: DepartmentRepository
    @Inject
    lateinit var employeeClient: EmployeeClient

    @Post
    fun add(@Body department: Department): Department {
        logger.info("Department add: {}", department)
        return repository.add(department)
    }

    @Get("/{id}")
    fun findById(id: Long): Department? {
        logger.info("Department find: id={}", id)
        return repository.findById(id)
    }

    @Get
    fun findAll(): List<Department> {
        logger.info("Department find")
        return repository.findAll()
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    open fun findByOrganization(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        return repository.findByOrganization(organizationId)
    }

    @Get("/organization/{organizationId}/with-employees")
    @ContinueSpan
    open fun findByOrganizationWithEmployees(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        val departments = repository.findByOrganization(organizationId)
        departments.forEach { it.employees = employeeClient.findByDepartment(it.id) }
        return departments
    }

}

It is worth taking a look at the HTTP client implementation. It has been discussed in detail in my previous article about Micronaut, Quick Guide to Microservice with Micronaut Framework.

@Client(id = "employee-service", path = "/employees")
interface EmployeeClient {

	@Get("/department/{departmentId}")
	fun findByDepartment(departmentId: Long): MutableList<Employee>
	
}

You can run all the microservices using IntelliJ. You may also build the whole project with Maven using the mvn clean install command, and then run them using the java -jar command. Thanks to maven-shade-plugin, the applications will be generated as uber jars. Then run them in the following order: config-service, discovery-service and the microservices.

$ java -jar config-service/target/config-service-1.0-SNAPSHOT.jar
$ java -jar discovery-service/target/discovery-service-1.0-SNAPSHOT.jar
$ java -jar employee-service/target/employee-service-1.0-SNAPSHOT.jar
$ java -jar department-service/target/department-service-1.0-SNAPSHOT.jar
$ java -jar organization-service/target/organization-service-1.0-SNAPSHOT.jar

Afterwards you may take a look at the Eureka dashboard available under the address http://localhost:8761 to see the list of running services. You may also perform some tests by calling the HTTP API methods.
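
For example, you may call the department-service endpoints with curl (an illustrative invocation – since micronaut.server.port is set to -1, each service starts on a random port that you can find in its startup logs; replace <PORT> accordingly):

$ curl http://localhost:<PORT>/departments
$ curl http://localhost:<PORT>/departments/organization/1/with-employees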

micronaut-2-3

Summary

The sample applications' source code is available on GitHub in the repository sample-micronaut-microservices in the branch kotlin. You can refer to that repository for more implementation details that have not been included in the article.

Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests of a system consisting of several microservices may be quite a challenge. Today I'm going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers. If you are interested in an introduction to these frameworks, you may take a look at my previous articles on those topics.

Today we will consider a system consisting of three microservices, where each microservice is developed by a different team. One of these microservices, trip-management, integrates with the two others: driver-management and passenger-management. The question is how to organize integration tests under these assumptions. In that case we can use one of the interesting features provided by Hoverfly – the ability to run it as a remote proxy. What does it mean in practice? It is illustrated in the picture below. The same external instance of the Hoverfly proxy is shared between all microservices during JUnit tests. The driver-management and passenger-management microservices test their own methods exposed for use by trip-management, but all the requests are sent through the remote Hoverfly instance acting as a proxy. Hoverfly captures all the requests and responses sent during the tests. On the other hand, trip-management also tests its methods, but the communication with the other microservices is simulated by the remote Hoverfly instance based on the previously captured HTTP traffic.

hoverfly-test-1.png

We will use Docker for running the remote instance of the Hoverfly proxy. We will also use Docker images of the microservices during the tests. That's why we need the Testcontainers framework, which is responsible for running application containers before starting the integration tests. So, the first step is to build Docker images of the driver-management and passenger-management microservices.

1. Building Docker Image

Assuming you have successfully installed Docker on your machine and you have set the environment variables DOCKER_HOST and DOCKER_CERT_PATH, you may use io.fabric8:docker-maven-plugin for it. It is important to execute the build goal of that plugin just after the package Maven phase, but before the integration-test phase. Here's the appropriate configuration inside the Maven pom.xml.

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<configuration>
		<images>
			<image>
				<name>piomin/driver-management</name>
				<alias>dockerfile</alias>
				<build>
					<dockerFileDir>${project.basedir}</dockerFileDir>
				</build>
			</image>
		</images>
	</configuration>
	<executions>
		<execution>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>build</goal>
			</goals>
		</execution>
	</executions>
</plugin>

2. Application Integration Tests

Our integration tests should be run during the integration-test phase, so they must not be executed during the test phase, which runs before building the application fat jar and the Docker image. Here's the appropriate configuration with maven-surefire-plugin.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<excludes>
			<exclude>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</exclude>
		</excludes>
	</configuration>
	<executions>
		<execution>
			<id>integration-test</id>
			<goals>
				<goal>test</goal>
			</goals>
			<phase>integration-test</phase>
			<configuration>
				<excludes>
					<exclude>none</exclude>
				</excludes>
				<includes>
					<include>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</include>
				</includes>
			</configuration>
		</execution>
	</executions>
</plugin>

3. Running Hoverfly

Before running any tests we need to start an instance of Hoverfly in proxy mode. To achieve that we use the Hoverfly Docker image. Because Hoverfly has to forward requests to the downstream microservices by host name, we create a Docker network and then run Hoverfly in this network.

$ docker network create tests
$ docker run -d --name hoverfly -p 8500:8500 -p 8888:8888 --network tests spectolabs/hoverfly

The Hoverfly proxy is now available for me (I'm using Docker Toolbox) under the address 192.168.99.100:8500. We can also take a look at the admin web console available under the address http://192.168.99.100:8888. Under that address you can also access the HTTP API, which is described later in the article.
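
For example, you can fetch the simulation data (all captured request/response pairs) recorded by this instance with a single HTTP call:

$ curl http://192.168.99.100:8888/api/v2/simulation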

4. Including test dependencies

To enable Hoverfly and Testcontainers for our tests we first need to include some dependencies in the Maven pom.xml. Our sample applications are built on top of Spring Boot, so we also include the Spring Boot Test starter.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.6</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.11.1</version>
	<scope>test</scope>
</dependency>

5. Building integration tests on the provider side

Now, we can finally proceed to the JUnit test implementation. Here's the full source code of the test for the driver-management microservice, but some things need to be explained. Before running our test methods we first start the Docker container of the application using Testcontainers. We use GenericContainer annotated with @ClassRule for that. Testcontainers provides an API for interacting with containers, so we can easily set the target Docker network and container hostname. We also wait until the application container is ready for use by calling the waitingFor method on GenericContainer.
The next step is to enable the Hoverfly rule for our test. We run it in capture mode. By default Hoverfly tries to start a local proxy instance; that's why we provide the remote address of the existing instance already started as a Docker container.
The tests are pretty simple. We call the endpoints using Spring TestRestTemplate. Because the requests must finally be proxied to the application container, we use its hostname as the target address. The whole traffic is captured by Hoverfly.

public class DriverControllerIntegrationTests {

    private TestRestTemplate template = new TestRestTemplate();

    @ClassRule
    public static GenericContainer appContainer = new GenericContainer<>("piomin/driver-management")
            .withCreateContainerCmdModifier(cmd -> cmd.withName("driver-management").withHostName("driver-management"))
            .withNetworkMode("tests")
            .withNetworkAliases("driver-management")
            .withExposedPorts(8080)
            .waitingFor(Wait.forHttp("/drivers"));

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inCaptureMode("driver.json", HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Test
    public void testFindNearestDriver() {
        Driver driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 40, 20);
        Assert.assertNotNull(driver);
        driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 10, 20);
        Assert.assertNotNull(driver);
    }

    @Test
    public void testUpdateDriver() {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        DriverInput input = new DriverInput();
        input.setId(2L);
        input.setStatus(DriverStatus.UNAVAILABLE);
        HttpEntity<DriverInput> entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
        input.setId(1L);
        input.setStatus(DriverStatus.AVAILABLE);
        entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
    }

}

Now, you can execute the tests during the application build using the mvn clean verify command. The sample application source code is available on GitHub in the repository sample-testing-microservices under the branch remote.

6. Building integration tests on the consumer side

In the previous section we discussed the integration tests implemented on the provider side. There are two microservices, driver-management and passenger-management, that expose endpoints invoked by the third microservice, trip-management. The traffic generated during the tests has already been captured by Hoverfly. This is a very important thing in this sample, because each time you build the newest version of a microservice, Hoverfly refreshes the structure of the previously recorded requests. Now, if we run the tests for the consumer application (trip-management), they are fully based on the newest version of the requests generated during the tests by the microservices on the provider side. You can check out the list of all requests captured by Hoverfly by calling the endpoint http://192.168.99.100:8888/api/v2/simulation.
Here are the integration tests implemented inside trip-management. They also use the remote Hoverfly proxy instance. The only difference is the running mode, which is simulation. It simulates the requests sent to driver-management and passenger-management based on the traffic captured by Hoverfly.

@SpringBootTest
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripIntegrationTests {

    ObjectMapper mapper = new ObjectMapper();

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inSimulationMode(HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Autowired
    MockMvc mockMvc;

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

Now, you can run the mvn clean verify command on the root module. It runs the tests in the following order: driver-management, passenger-management, and trip-management.

hoverfly-test-3

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I used Docker containers for running third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if our application integrates with databases, message brokers, or some other popular tools. If you are planning to build such integration tests you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra, or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx, or HashiCorp’s Vault. Today I’m going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database, and Vault. We will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case.
I described there how to use a very interesting Vault feature called secret engines for generating database user credentials dynamically. I used the Spring Cloud Vault module in my Spring Boot application to integrate with that feature automatically. The implemented mechanism is pretty simple. On startup, the application calls the Vault secret engine before it tries to connect to the Postgres database. Vault is integrated with Postgres via the secret engine, so it creates a user with sufficient privileges on Postgres. Then the generated credentials are automatically injected into the auto-configured Spring Boot properties used for connecting with the database: spring.datasource.username and spring.datasource.password. The following picture illustrates the described solution.

testcontainers-1 (1).png

Ok, we know how it works; now the question is how to test it automatically. With Testcontainers it is possible with just a few lines of code.

1. Building application

Let’s begin with a short intro to the application code. It is very simple. Here’s the list of dependencies required for building an application that exposes a REST API and integrates with Postgres and Vault.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The application connects to Postgres, enables integration with Vault via Spring Cloud Vault, and automatically creates/updates tables on startup.

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres
  jpa.hibernate.ddl-auto: update

It exposes a single endpoint. The following method is responsible for handling incoming requests. It just inserts a record into the database and returns a response with the application name, version, and the id of the inserted record.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
	
	@Autowired
	Optional<BuildProperties> buildProperties;
	@Autowired
	CallmeRepository repository;
	
	@GetMapping("/message/{message}")
	public String ping(@PathVariable("message") String message) {
		Callme c = repository.save(new Callme(message, new Date()));
		if (buildProperties.isPresent()) {
			BuildProperties infoProperties = buildProperties.get();
			LOGGER.info("Ping: name={}, version={}", infoProperties.getName(), infoProperties.getVersion());
			return infoProperties.getName() + ":" + infoProperties.getVersion() + ":" + c.getId();
		} else {
			return "callme-service:"  + c.getId();
		}
	}
	
}
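
The controller relies on a Callme JPA entity and a CallmeRepository bean, which are not shown above. Here’s a minimal sketch of both (imports omitted, as in the other listings); only the constructor taking a message and a date and the getId method are certain from the controller code, so the exact column layout is an assumption:

@Entity
public class Callme {

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;
	private String message;
	@Temporal(TemporalType.TIMESTAMP)
	private Date date;

	public Callme() {
	}

	public Callme(String message, Date date) {
		this.message = message;
		this.date = date;
	}

	public Long getId() {
		return id;
	}

}

public interface CallmeRepository extends CrudRepository<Callme, Long> {
}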

2. Enabling Testcontainers

To enable Testcontainers for our project we need to include some dependencies in our Maven pom.xml. There are dedicated modules for Postgres and Vault. We also include the Spring Boot Test dependency, because we would like to test the whole Spring Boot app.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>vault</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>postgresql</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>

3. Running Vault test container

The Testcontainers framework supports JUnit 4, JUnit 5, and Spock. The Vault container can be started before the tests if it is annotated with @Rule or @ClassRule. By default it uses version 0.7 of the Vault image, but we can override it with the newest version, which is 1.0.2. We may also set a root token, which is then required by Spring Cloud Vault for integration with Vault.

@ClassRule
public static VaultContainer vaultContainer = new VaultContainer<>("vault:1.0.2")
	.withVaultToken("123456")
	.withVaultPort(8200);

That root token is then passed to the application by overriding the spring.cloud.vault.token property on the test class before the JUnit tests start.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, properties = {
    "spring.cloud.vault.token=123456"
})
public class CallmeTest { ... }

4. Running Postgres test container

As an alternative to @ClassRule, we can manually start the container in a @BeforeClass or @Before method in the test. With this approach you will also have to stop it manually in an @AfterClass or @After method. We start the Postgres container manually, because by default it is exposed on a dynamically generated port, which needs to be set for the Spring Boot application before starting the test. The mapped port is returned by the getFirstMappedPort method invoked on PostgreSQLContainer.

private static PostgreSQLContainer postgresContainer = new PostgreSQLContainer()
	.withDatabaseName("postgres")
	.withUsername("postgres")
	.withPassword("postgres123");
	
@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", port));
	// ...
}

@AfterClass
public static void shutdown() {
	postgresContainer.stop();
}

5. Integrating Vault and Postgres containers

Once we have successfully started both the Vault and Postgres containers, we need to integrate them via the Vault database secret engine. First, we need to enable the database secret engine in Vault. After that, we must configure the connection to Postgres. The last step is to configure a role. A role is a logical name that maps to a policy used to generate those credentials. All these actions may be performed using Vault commands. You can launch a command on the Vault container using the execInContainer method. The Vault configuration commands should be executed just after the Postgres container startup.

@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", port));
	vaultContainer.execInContainer("vault", "secrets", "enable", "database");
	String url = String.format("connection_url=postgresql://{{username}}:{{password}}@192.168.99.100:%d?sslmode=disable", port);
	vaultContainer.execInContainer("vault", "write", "database/config/postgres", "plugin_name=postgresql-database-plugin", "allowed_roles=default", url, "username=postgres", "password=postgres123");
	vaultContainer.execInContainer("vault", "write", "database/roles/default", "db_name=postgres",
		"creation_statements=CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
		"default_ttl=1h", "max_ttl=24h");
}

6. Running application tests

Finally, we may run the application tests. We just call the single endpoint exposed by the app using TestRestTemplate and verify the output.

@Autowired
TestRestTemplate template;

@Test
public void test() {
	String res = template.getForObject("/callme/message/{message}", String.class, "Test");
	Assert.assertNotNull(res);
	Assert.assertTrue(res.endsWith("1"));
}

If you are interested in what exactly happens during the test, you can set a breakpoint inside the test method and execute the docker ps command manually.

testcontainers-2

Quick Guide to Microservices with Micronaut Framework

The Micronaut framework has been introduced as an alternative to Spring Boot for building microservice applications. At first glance it is very similar to Spring. It also implements patterns like dependency injection and inversion of control based on annotations; however, it uses JSR-330 (javax.inject) for doing it. It has been designed especially for building serverless functions, Android applications, and microservices with a low memory footprint. This means it should offer faster startup time, lower memory usage, and easier unit testing than competitive frameworks. However, today I don’t want to focus on those characteristics of Micronaut. I’m going to show you how to build a simple microservices-based system using this framework. You can easily compare it with Spring Boot and Spring Cloud by reading my previous article about the same subject, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. Does Micronaut have a chance to gain the same popularity as Spring Boot? Let’s find out.

Our sample system consists of three independent microservices that communicate with each other. All of them integrate with Consul in order to fetch shared configuration. After startup, every single service registers itself in Consul. The organization-service and department-service applications call endpoints exposed by other microservices using the Micronaut declarative HTTP client. The traces from that communication are sent to Zipkin. The source code of the sample applications is available on GitHub in the repository sample-micronaut-microservices.

micronaut-arch (1).png

Step 1. Creating application

We need to start by including some dependencies in our Maven pom.xml. First, let’s define a BOM with the newest stable Micronaut version.

<properties>
	<exec.mainClass>pl.piomin.services.employee.EmployeeApplication</exec.mainClass>
	<micronaut.version>1.0.3</micronaut.version>
	<jdk.version>1.8</jdk.version>
</properties>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.micronaut</groupId>
			<artifactId>micronaut-bom</artifactId>
			<version>${micronaut.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The list of required dependencies isn’t very long. Not all of them are strictly required, but they will be useful in our demo. For example, micronaut-management needs to be included if we would like to expose some built-in management and monitoring endpoints.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>

To build an application uber-jar we need to configure a plugin responsible for packaging the JAR file with dependencies, for example the maven-shade-plugin. When building a new application it is also worth exposing basic information about it under the /info endpoint. As I have already mentioned, Micronaut adds support for monitoring your app via HTTP endpoints after including the micronaut-management artifact. Management endpoints are integrated with the Micronaut security module, which means that you need to authenticate yourself to be able to access them. To simplify things, we can disable authentication for the /info endpoint.

endpoints:
  info:
    enabled: true
    sensitive: false

We can customize the /info endpoint by adding some supported info sources. This mechanism is very similar to the Spring Boot Actuator approach. If a git.properties file is available on the classpath, all the values inside that file will be exposed by the /info endpoint. The same applies to the build-info.properties file, which needs to be placed inside the META-INF directory. However, in comparison with Spring Boot, we need to provide more configuration in pom.xml to generate those files and package them into the application JAR. The following Maven plugins are responsible for generating the required properties files.

<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<version>2.2.6</version>
	<executions>
		<execution>
			<id>get-the-git-infos</id>
			<goals>
				<goal>revision</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<verbose>true</verbose>
		<dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
		<dateFormat>MM-dd-yyyy '@' HH:mm:ss Z</dateFormat>
		<generateGitPropertiesFile>true</generateGitPropertiesFile>
		<generateGitPropertiesFilename>src/main/resources/git.properties</generateGitPropertiesFilename>
		<failOnNoGitDirectory>true</failOnNoGitDirectory>
	</configuration>
</plugin>
<plugin>
	<groupId>com.rodiontsev.maven.plugins</groupId>
	<artifactId>build-info-maven-plugin</artifactId>
	<version>1.2</version>
	<configuration>
		<filename>classes/META-INF/build-info.properties</filename>
		<projectProperties>
			<projectProperty>project.groupId</projectProperty>
			<projectProperty>project.artifactId</projectProperty>
			<projectProperty>project.version</projectProperty>
		</projectProperties>
	</configuration>
	<executions>
		<execution>
			<phase>prepare-package</phase>
			<goals>
				<goal>extract</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Now, our /info endpoint is able to print the most important information about our app, including the Maven artifact name, version, and last Git commit id.

micronaut-2

Step 2. Exposing HTTP endpoints

Micronaut provides its own annotations for marking HTTP endpoints and methods. As I mentioned in the preface, it also uses JSR-330 (javax.inject) for dependency injection. Our controller class should be annotated with @Controller. There are also annotations for every HTTP method type. A path parameter is automatically mapped to the class method parameter by its name, which is a nice simplification in comparison to Spring MVC, where we need to use the @PathVariable annotation. The repository bean used for CRUD operations is injected into the controller using the @Inject annotation.

@Controller("/employees")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Inject
    EmployeeRepository repository;

    @Post
    public Employee add(@Body Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.add(employee);
    }

    @Get("/{id}")
    public Employee findById(Long id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Employee> findAll() {
        LOGGER.info("Employees find");
        return repository.findAll();
    }

    @Get("/department/{departmentId}")
    @ContinueSpan
    public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
        LOGGER.info("Employees find: departmentId={}", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    public List<Employee> findByOrganization(@SpanTag("organizationId") Long organizationId) {
        LOGGER.info("Employees find: organizationId={}", organizationId);
        return repository.findByOrganization(organizationId);
    }

}

Our repository bean is pretty simple. It just provides an in-memory store for Employee instances. We mark it with the @Singleton annotation.

@Singleton
public class EmployeeRepository {

	private List<Employee> employees = new ArrayList<>();
	
	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		return employees.stream().filter(a -> a.getId().equals(id)).findFirst().orElse(null);
	}
	
	public List<Employee> findAll() {
		return employees;
	}
	
	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}
	
	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}
	
}
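
For completeness, here’s a sketch of the Employee model used by the repository above (imports omitted). Only the id, departmentId, and organizationId fields are certain from the finder methods; the remaining fields are assumptions:

public class Employee {

	private Long id;
	private Long organizationId;
	private Long departmentId;
	private String name;

	public Long getId() { return id; }
	public void setId(Long id) { this.id = id; }
	public Long getOrganizationId() { return organizationId; }
	public void setOrganizationId(Long organizationId) { this.organizationId = organizationId; }
	public Long getDepartmentId() { return departmentId; }
	public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; }
	public String getName() { return name; }
	public void setName(String name) { this.name = name; }

}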

Micronaut is able to automatically generate a Swagger YAML definition from our controllers and methods based on their annotations. To achieve this, we first need to include the following dependency in our pom.xml.

<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>

Then we should annotate the application main class with @OpenAPIDefinition and provide some basic information like the title or version number. Here’s the employee application’s main class.

@OpenAPIDefinition(
    info = @Info(
        title = "Employees Management",
        version = "1.0",
        description = "Employee API",
        contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
    )
)
public class EmployeeApplication {

    public static void main(String[] args) {
        Micronaut.run(EmployeeApplication.class);
    }

}

Micronaut derives the Swagger file name from the title and version fields inside the @Info annotation. In that case, our YAML definition file is available under the name employees-management-1.0.yml and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here’s the appropriate configuration provided inside the application.yml file.

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**

Now, our file is available under the path http://localhost:8080/swagger/employees-management-1.0.yml if we run the application on the default 8080 port (we won’t do that, for reasons I’m going to describe in the next part of this article). In comparison to Spring Boot, there is no project like Swagger SpringFox for Micronaut, so we need to copy the content into an online editor in order to see the graphical representation of the Swagger YAML. Here it is.

micronaut-1.PNG

Ok, since we have finished the implementation of a single microservice, we may proceed to the cloud-native features provided by Micronaut.

Step 3. Distributed configuration

Micronaut comes with built-in APIs for distributed configuration. In fact, the only solution available for now is distributed configuration based on HashiCorp’s Consul. Micronaut’s features for externalizing and adapting configuration to the environment are very similar to the Spring Boot approach. We also have application.yml and bootstrap.yml files, which can be used for application environment configuration. When using distributed configuration, we first need to provide a bootstrap.yml file on the classpath. It should contain the address of the remote configuration server and the preferred configuration store format. Of course, we also need to enable the distributed configuration client by setting the property micronaut.config-client.enabled to true. Here’s the bootstrap.yml file for department-service.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "192.168.99.100:8500"
    config:
      format: YAML

We can choose between properties, JSON, YAML, and FILES (git2consul) configuration formats. I decided to use YAML. To apply this configuration in Consul, we first need to start it locally in development mode. Because I’m using Docker Toolbox, the default address of Consul is 192.168.99.100. The following Docker command will start a single-node Consul instance and expose it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

Now, you can navigate to the Key/Value tab in the Consul web console and create a new file in YAML format under the key /config/application.yml, as shown below. Besides the configuration for Swagger and the /info management endpoint, it also enables dynamic HTTP port generation on startup by setting the property micronaut.server.port to -1. Because the name of the file is application.yml, it is by default shared among all microservices that use the Consul config client.
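
Based on the configuration fragments shown earlier in this article, the content of that shared file might look more or less like this (a sketch, not a verbatim copy of the screenshot below):

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false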

micronaut-2

Step 4. Service discovery

Micronaut gives you more options for service discovery than for distributed configuration. You can use Eureka, Consul, Kubernetes, or just manually configure the list of available services. However, I have observed that using the Eureka discovery client together with the Consul config client causes some errors on startup. In this example we will use Consul discovery. Because the Consul address has already been provided in bootstrap.yml for every microservice, we just need to enable service discovery by adding the following lines to the application.yml stored in Consul KV.

consul:
  client:
    registration:
      enabled: true

We should also include the following dependency to Maven pom.xml of every single application.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

Finally, you can just run every microservice (you may run more than one instance locally, since the HTTP port is generated dynamically). Here’s my list of running services registered in Consul.

micronaut-3

I have run two instances of employee-service as shown below.

micronaut-4

Step 5. Inter-service communication

Micronaut uses its built-in HTTP client for load balancing between multiple instances of a single microservice. By default it leverages the Round Robin algorithm. We may choose between the low-level HTTP client and the declarative HTTP client with @Client. The Micronaut declarative HTTP client concept is very similar to Spring Cloud OpenFeign. To use the built-in client we first need to include the following dependency in the project pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-client</artifactId>
</dependency>

The declarative client automatically integrates with the discovery client. It tries to find a service registered in Consul under the same name as the value provided in the id field.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {

	@Get("/department/{departmentId}")
	List<Employee> findByDepartment(Long departmentId);
	
}
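
For comparison, the same call may be performed with the low-level HTTP client mentioned earlier. The following sketch (a hypothetical EmployeeLowLevelClient, not part of the sample repository; imports omitted) injects an RxHttpClient pointing at the service id registered in Consul, so it is load balanced the same way as the declarative client:

@Singleton
public class EmployeeLowLevelClient {

	@Inject
	@Client(id = "employee-service")
	RxHttpClient httpClient;

	public List<Employee> findByDepartment(Long departmentId) {
		// Blocks on the reactive call and maps the JSON response to a list of employees
		return httpClient.toBlocking()
				.retrieve(HttpRequest.GET("/employees/department/" + departmentId),
						Argument.listOf(Employee.class));
	}

}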

Now, the declarative client bean needs to be injected into the controller.

@Controller("/departments")
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Inject
	DepartmentRepository repository;
	@Inject
	EmployeeClient employeeClient;
	
	@Post
	public Department add(@Body Department department) {
		LOGGER.info("Department add: {}", department);
		return repository.add(department);
	}
	
	@Get("/{id}")
	public Department findById(Long id) {
		LOGGER.info("Department find: id={}", id);
		return repository.findById(id);
	}
	
	@Get
	public List<Department> findAll() {
		LOGGER.info("Department find");
		return repository.findAll();
	}
	
	@Get("/organization/{organizationId}")
	@ContinueSpan
	public List<Department> findByOrganization(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}
	
	@Get("/organization/{organizationId}/with-employees")
	@ContinueSpan
	public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

Step 6. Distributed tracing

A Micronaut application can be easily integrated with Zipkin to automatically send it traces of the HTTP traffic. To enable this feature we first need to include the following dependencies in pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
	<groupId>io.zipkin.brave</groupId>
	<artifactId>brave-instrumentation-http</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.zipkin.reporter2</groupId>
	<artifactId>zipkin-reporter</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.opentracing.brave</groupId>
	<artifactId>brave-opentracing</artifactId>
</dependency>

Then, we have to provide some configuration settings inside application.yml, including the Zipkin URL and sampler options. By setting the property tracing.zipkin.sampler.probability to 1 we force Micronaut to send traces for every single request. Here’s our final configuration.

micronaut-5
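
In text form, those settings might look like this (a sketch; the key layout follows Micronaut’s tracing configuration, with the Zipkin address used throughout this article):

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1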

During the tests of my application I observed that using distributed configuration together with Zipkin tracing results in problems with communication between the microservices and Zipkin: the traces just do not appear in Zipkin. So, if you would like to test this feature now, you must provide application.yml on the classpath and disable Consul distributed configuration for all your applications.

We can add some tags to the spans by using the @ContinueSpan or @NewSpan annotations on methods.
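
For example, a custom span with a tag could be declared on any bean method like this (a hypothetical EmployeeSearchService, shown just to illustrate the annotations; imports omitted):

@Singleton
public class EmployeeSearchService {

	@Inject
	EmployeeRepository repository;

	// Starts a new span named "employees-by-department" and tags it with the
	// departmentId parameter, so the lookup appears as a separate step in Zipkin.
	@NewSpan("employees-by-department")
	public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
		return repository.findByDepartment(departmentId);
	}

}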

After making some test calls to the GET methods exposed by organization-service and department-service, we may take a look at the Zipkin web console, available at http://192.168.99.100:9411. The following picture shows the list of all the traces sent to Zipkin by our microservices within 1 hour.

micronaut-7

We can check out the details of every trace by clicking on an element from the list. The following picture illustrates the timeline for the HTTP method exposed by organization-service: GET /organizations/{id}/with-departments-and-employees. This method finds the organization in the in-memory repository and then calls the HTTP method exposed by department-service: GET /departments/organization/{organizationId}/with-employees. That method is responsible for finding all departments assigned to the given organization. It also needs to return the employees within each department, so it calls GET /employees/department/{departmentId} from employee-service.

micronaut-8

We can also take a look at the details of every single call from the timeline.

micronaut-9

Conclusion

In comparison to Spring Boot, Micronaut is still at an early stage of development. For example, I was not able to implement an application that could act as an API gateway for our system, something that can easily be achieved with Spring using Spring Cloud Gateway or Spring Cloud Netflix Zuul. There are still some bugs that need to be fixed. But above all that, Micronaut is now probably the most interesting micro-framework on the market. It implements the most popular microservice patterns, provides integration with several third-party solutions like Consul, Eureka, Zipkin, or Swagger, consumes less memory, and starts faster than a similar Spring Boot app. I will definitely follow the progress of Micronaut development closely.