Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is increasingly used with Spring Boot for building backend services. Starting with version 5, Spring Framework has introduced first-class support for Kotlin. In this article I’m going to show you an example of a microservice built with Kotlin and Spring Boot 2. I’ll describe some interesting features of Spring Boot that can be treated as a set of good practices when building backend, REST-based microservices.

1. Configuration and dependencies

To use Kotlin in your Maven project you have to add the kotlin-maven-plugin and the /src/main/kotlin and /src/test/kotlin directories to the build configuration. We will also set the -Xjsr305 compiler flag to strict. This option enables strict null-safety checks based on JSR-305 annotations (for example the @NotNull annotation).

<build>
	<sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
	<testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
	<plugins>
		<plugin>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-plugin</artifactId>
			<configuration>
				<args>
					<arg>-Xjsr305=strict</arg>
				</args>
				<compilerPlugins>
					<plugin>spring</plugin>
				</compilerPlugins>
			</configuration>
			<dependencies>
				<dependency>
					<groupId>org.jetbrains.kotlin</groupId>
					<artifactId>kotlin-maven-allopen</artifactId>
					<version>${kotlin.version}</version>
				</dependency>
			</dependencies>
		</plugin>
	</plugins>
</build>

We should also include the core Kotlin libraries kotlin-stdlib-jdk8 and kotlin-reflect. They are added by default for a Kotlin project generated on start.spring.io. For REST-based applications you will also need the Jackson library (with its Kotlin module) used for JSON serialization/deserialization. Of course, we have to include the Spring starter for web applications together with Actuator, which provides the management endpoints.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>

We use the latest stable version of Spring Boot together with Kotlin 1.2.71.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.2.RELEASE</version>
</parent>
<properties>
	<java.version>1.8</java.version>
	<kotlin.version>1.2.71</kotlin.version>
</properties>

2. Building application

Let’s begin with the basics. If you are familiar with Spring Boot and Java, the biggest difference is in the main class declaration. You call the runApplication top-level function outside the Spring Boot application class. The main class, the same as in Java, is annotated with @SpringBootApplication.

@SpringBootApplication
class SampleSpringKotlinMicroserviceApplication

fun main(args: Array<String>) {
    runApplication<SampleSpringKotlinMicroserviceApplication>(*args)
}

Our sample application is very simple. It exposes a few REST endpoints providing CRUD operations for a model object. Even in this fragment of code illustrating the controller implementation you can see some nice Kotlin features. We may use a shortened function declaration with an inferred return type. The @PathVariable annotation does not require any arguments; the path variable name is assumed to be the same as the parameter name. Of course, we are using the same annotations as in Java. In Kotlin, every property declared with a non-null type must be initialized in the constructor. So, if you are initializing it using dependency injection, it has to be declared as lateinit. Here’s the implementation of PersonController.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Autowired
    lateinit var repository: PersonRepository

    @GetMapping("/{id}")
    fun findById(@PathVariable id: Int): Person? = repository.findById(id)

    @GetMapping
    fun findAll(): List<Person> = repository.findAll()

    @PostMapping
    fun add(@RequestBody person: Person): Person = repository.save(person)

    @PutMapping
    fun update(@RequestBody person: Person): Person = repository.update(person)

    @DeleteMapping("/{id}")
    fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Kotlin automatically generates getters and setters for class properties declared as var. Also, if you declare the model as a data class, it generates the equals, hashCode, and toString methods. The declaration of our model class Person is very concise, as shown below.

data class Person(var id: Int?, var name: String, var age: Int, var gender: Gender)

I have implemented my own in-memory repository class. I use Kotlin extension functions for manipulating the list of elements. This built-in Kotlin feature is similar to Java streams, with the difference that you don’t have to perform any conversion between Collection and Stream.

@Repository
class PersonRepository {
    val persons: MutableList<Person> = ArrayList()

    fun findById(id: Int): Person? {
        return persons.singleOrNull { it.id == id }
    }

    fun findAll(): List<Person> {
        return persons
    }

    fun save(person: Person): Person {
        person.id = (persons.maxBy { it.id!! }?.id ?: 0) + 1
        persons.add(person)
        return person
    }

    fun update(person: Person): Person {
        val index = persons.indexOfFirst { it.id == person.id }
        if (index >= 0) {
            persons[index] = person
        }
        return person
    }

    fun removeById(id: Int): Boolean {
        return persons.removeIf { it.id == id }
    }

}

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-spring-kotlin-microservice.git.

3. Enabling Actuator endpoints

Since we have already included the Spring Boot Actuator starter in the application, we can take advantage of its production-ready features. Spring Boot Actuator gives you very powerful tools for monitoring and managing your apps. You can provide advanced health checks and info endpoints, or send metrics to numerous monitoring systems like InfluxDB. After including the Actuator artifacts, the only thing we have to do is to expose all its endpoints over HTTP.

management.endpoints.web.exposure.include: '*'
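
Once the application is running, you can quickly verify that the endpoints are exposed over HTTP, for example by calling the health endpoint with curl (the exact response body depends on your configuration and the health indicators on the classpath):

$ curl http://localhost:8080/actuator/health
{"status":"UP"}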

We can customize Actuator endpoints to provide more details about our app. A good practice is to expose the version and git commit information through the info endpoint. As usual, Spring Boot provides auto-configuration for such features, so the only thing we need to do is to add some Maven plugins to the build configuration in pom.xml. The build-info goal set for spring-boot-maven-plugin makes it generate a properties file with basic information about the version. The file is located under META-INF/build-info.properties. The git-commit-id-plugin will generate a git.properties file with information about the Git repository.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>build-info</goal>
			</goals>
		</execution>
	</executions>
</plugin>
<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<configuration>
		<failOnNoGitDirectory>false</failOnNoGitDirectory>
	</configuration>
</plugin>

Now you can build the application with the mvn clean install command and then run it.

$ java -jar target\sample-spring-kotlin-microservice-1.0-SNAPSHOT.jar

The info endpoint is available under http://localhost:8080/actuator/info. It exposes all the information we are interested in.

{
	"git":{
		"commit":{
			"time":"2019-01-14T16:20:31Z",
			"id":"f7cb437"
		},
		"branch":"master"
	},
	"build":{
		"version":"1.0-SNAPSHOT",
		"artifact":"sample-spring-kotlin-microservice",
		"name":"sample-spring-kotlin-microservice",
		"group":"pl.piomin.services",
		"time":"2019-01-15T09:18:48.836Z"
	}
}

4. Enabling API documentation

Build info and git properties may be easily injected into the application code. This can be useful in some cases, for example if you have enabled auto-generated API documentation. The most popular tool used for that is Swagger. You can easily integrate Swagger 2 with Spring Boot using the SpringFox Swagger project. First, you need to add the following dependencies to your pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>

Then, you should enable Swagger by annotating the configuration class with @EnableSwagger2. The required information is available inside the BuildProperties and GitProperties beans. We just have to inject them into the Swagger configuration class as shown below. We inject them as Optional to prevent an application startup failure in case those beans are not available.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            var buildInfo = build.get()
            var gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths{ it.equals("/persons")}
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true)
    }

    @Bean
    fun uiConfig(): UiConfiguration {
        return UiConfiguration(java.lang.Boolean.TRUE, java.lang.Boolean.FALSE, 1, 1, ModelRendering.MODEL, java.lang.Boolean.FALSE, DocExpansion.LIST, java.lang.Boolean.FALSE, null, OperationsSorter.ALPHA, java.lang.Boolean.FALSE, TagsSorter.ALPHA, UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, null)
    }

    private fun apiInfo(version: String): ApiInfo {
        return ApiInfoBuilder()
                .title("API - Person Service")
                .description("Persons Management")
                .version(version)
                .build()
    }

}

The documentation is available under the context path /swagger-ui.html. Besides the API documentation, it displays the full information about the application version, git commit id and branch name.

kotlin-microservices-1.PNG

5. Choosing your app server

A Spring Boot web application can be run on three different embedded servers: Tomcat, Jetty or Undertow. By default it uses Tomcat. To change the default server you just need to include the suitable Spring Boot starter and exclude spring-boot-starter-tomcat. A good practice may be to enable switching between servers during the application build. You can achieve this by declaring Maven profiles as shown below.

<profiles>
	<profile>
		<id>tomcat</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>jetty</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-jetty</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>undertow</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-undertow</artifactId>
			</dependency>
		</dependencies>
	</profile>
</profiles>

Now, if you would like to use a server other than Tomcat for your application, you should activate the appropriate profile during the Maven build.

$ mvn clean install -Pjetty

Conclusion

Development of microservices using Kotlin and Spring Boot is nice and simple. Based on the sample application, I have introduced the main Spring Boot features for Kotlin. I have also described some good practices you may apply to your microservices when building them with Spring Boot and Kotlin. You can compare the described approach with some other micro-frameworks used with Kotlin, for example Ktor, described in one of my previous articles, Kotlin Microservices with Ktor.

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you might prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image (S2I) feature used for building reproducible Docker images from application source code. With S2I you don’t have to provide any Kubernetes YAML templates or build a Docker image by yourself; OpenShift will do it for you. Let’s see how it works. The best way to test it locally is via Minishift, but the first step is to prepare the sample application source code.

1. Prepare application code

I have already described how to run Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code here, so you will be able to compare the two approaches. The source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a MongoDB backend. Here’s a diagram that illustrates our architecture.

s2i-1

Every microservice is a Spring Boot application which uses Maven as its build tool. After including spring-boot-maven-plugin it is able to generate a single fat jar with all dependencies, which is required by the source-2-image builder.

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Every application includes the starters for Spring Web, Spring Boot Actuator and Spring Data MongoDB for integration with the Mongo database. We will also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for those applications which call REST endpoints exposed by other microservices.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger-ui</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-mongodb</artifactId>
	</dependency>
</dependencies>

Every Spring Boot application exposes a REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

The application expects environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through the OpenFeign declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

	@GetMapping("/employee/organization/{organizationId}")
	List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
	
}

The address of the target service accessed by the Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.

spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally you just have to download it from the project site, copy minishift.exe (on Windows) to a directory on your PATH and start it using the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications, Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable source-2-image for Java apps. First, we grant the admin user the privileges needed to access the openshift project. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let’s begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as a cluster admin. Now, we can grant the cluster-admin role to the admin user.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin. You should be able to see the openshift project. That’s not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas addon. I prefer the second option.

$ minishift addons apply xpaas

Now, you can go to the Minishift web console (in my case available at https://192.168.99.100:8443), select the openshift project and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

s2i-2

The newest version of that image here is 1.3. Surprisingly, it is not the newest version available on OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What’s important for us, OpenShift generates a secret, by default available under the name mongodb.
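
If you prefer the command line over the web catalog, a similar deployment can be performed with oc new-app. The snippet below is only a rough sketch assuming the mongodb-ephemeral template and example parameter values – check the parameters available in your OpenShift version:

$ oc new-app mongodb-ephemeral -p MONGODB_USER=micro -p MONGODB_PASSWORD=micro123 -p MONGODB_DATABASE=microservices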

s2i-3

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based, standalone Java projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the application type is auto-detected by OpenShift, and source code written in Java will be deployed as a JEE application on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let’s create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances each microservice would be located in a separate Git repository. In our sample all of them are placed in a single repository, so we have to provide the location of the current app by setting the --context-dir parameter. We will also override the default branch to openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee

All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. It can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application a new build starts. First, it downloads the source code from the Git repository, then builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
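
You can quickly list the objects created for a single application with one query, for example (assuming the default app label added by oc new-app):

$ oc get bc,dc,svc,is -l app=employee
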
Now, we can create the next application – department. For simplicity, all three microservices connect to the same database, which is not recommended under normal circumstances. In that case the only difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

As before, we also inject the mongodb secret into the BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we can deploy the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep look into created OpenShift objects

The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available – one for each application. The same list can be displayed using the oc get bc command.

s2i-4

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

s2i-5

We can always display the YAML configuration file with the BuildConfig definition, but it is also possible to perform a similar action using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

s2i-6.PNG

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available under the address 172.30.1.1:5000. The list of available image streams can be found under the section Builds -> Images.

s2i-7

Every application is automatically exposed on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia) via services. You can also expose these services outside Minishift by creating an OpenShift Route using the oc expose command.

s2i-8

5. Testing the sample system

To proceed with the tests we should first expose our microservices outside Minishift. To do that just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that we can access the applications at http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io, as shown below.

s2i-9

Each microservice provides Swagger 2 API documentation available on the page swagger-ui.html. Thanks to that we can easily test every single endpoint exposed by the service.

s2i-10

It’s worth noticing that every application makes use of three approaches to inject environment variables into the pod:

  1. It stores the version number in the source code repository inside the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property name is VERSION; it is injected using Spring’s @Value annotation and set for the Swagger API (the code is visible below, followed by a sample of the file’s contents).
  2. I have already set the names of dependent services as environment variables when executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}")
String version;

public static void main(String[] args) {
	SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}
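
For reference, .s2i/environment is just a list of KEY=VALUE pairs. A minimal, hypothetical example providing the property read by the code above could look like this:

VERSION=1.0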

Conclusion

Today I showed you that deploying your applications on OpenShift can be a very simple thing. You don’t have to create any YAML descriptor files or build Docker images by yourself to run your app; it is built directly from your source code. You can compare this with the deployment on Kubernetes described in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.

RabbitMQ Cluster with Consul and Vault

Almost two years ago I wrote an article about RabbitMQ clustering, RabbitMQ in cluster. It was one of the first posts on my blog, and it’s really hard to believe it has been two years since I started blogging. Anyway, one of the questions about the topic described in that article inspired me to return to the subject one more time. That question pointed to the approach used for setting up the cluster, which assumes that we manually attach new nodes to the cluster by executing the rabbitmqctl join_cluster command with the cluster name as a parameter. If I remember correctly, it was the only available method of creating a cluster at that time. Today we have more choices, which illustrates the evolution of RabbitMQ over the last two years. A RabbitMQ cluster can be formed in a number of ways:

  • Manually with rabbitmqctl (as described in my article RabbitMQ in cluster)
  • Declaratively, by listing cluster nodes in the config file (a sample is shown right after this list)
  • Using DNS-based discovery
  • Using AWS (EC2) instance discovery via a dedicated plugin
  • Using Kubernetes discovery via a dedicated plugin
  • Using Consul discovery via a dedicated plugin
  • Using etcd-based discovery via a dedicated plugin
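
As an illustration of the declarative option mentioned in the list above, a classic static configuration in rabbitmq.conf might look roughly like this (the node names are assumptions):

cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbit1
cluster_formation.classic_config.nodes.2 = rabbit@rabbit2
cluster_formation.classic_config.nodes.3 = rabbit@rabbit3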

Today, I’m going to show you how to create a RabbitMQ cluster using service discovery based on HashiCorp’s Consul. Additionally, we will add Vault to our architecture in order to use its interesting feature called secrets engines for managing the credentials used for accessing RabbitMQ. We will set up this sample on the local machine using Docker images of RabbitMQ, Consul and Vault. Finally, we will test our solution using a simple Spring Boot application that sends messages to the cluster and listens for incoming ones. That application is available in the GitHub repository sample-haclustered-rabbitmq-service in the branch consul.

Architecture

We use Vault as a credentials manager when applications try to authenticate against a RabbitMQ node or a user tries to log in to the RabbitMQ web admin console. Each RabbitMQ node registers itself in Consul after startup and retrieves the list of nodes running inside the cluster. Vault is integrated with RabbitMQ using a dedicated secrets engine. Here’s the architecture of our sample solution.

rabbit-consul-logo (1)

1. Configure RabbitMQ Consul plugin

The integration between RabbitMQ and Consul is realized via the rabbitmq-peer-discovery-consul plugin. This plugin is not enabled by default in the official RabbitMQ Docker image. So, the first step is to build our own Docker image, based on the official RabbitMQ image, that enables the required plugin. By default, the main RabbitMQ configuration file is available under the path /etc/rabbitmq/rabbitmq.conf inside the Docker container. To override it we just use the COPY statement as shown below. The following Dockerfile takes RabbitMQ with the management web console as the base image and enables the rabbitmq_peer_discovery_consul plugin.

FROM rabbitmq:3.7.8-management
COPY rabbitmq.conf /etc/rabbitmq
RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul

Now, let’s take a closer look at the plugin configuration settings. Because I run Docker on Windows, Consul is not available under the default localhost address, but on 192.168.99.100. So, first we need to set that IP address using the property cluster_formation.consul.host. We also need to set Consul as the default peer discovery implementation by setting the plugin name in the property cluster_formation.peer_discovery_backend. Finally, we have to set two additional properties to make it work in our local Docker environment. They are related to the address of the RabbitMQ node sent to Consul during the registration process. It is important to compute it properly and not to send, for example, localhost. After setting the property cluster_formation.consul.svc_addr_use_nodename to false, a node will register itself using its host name instead of its node name. We can set the host name for a container inside its run command. Here’s the full RabbitMQ configuration file used in the demo for this article.

loopback_users.guest = false
listeners.tcp.default = 5672
hipe_compile = false
management.listener.port = 15672
management.listener.ssl = false
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_consul
cluster_formation.consul.host = 192.168.99.100
cluster_formation.consul.svc_addr_auto = true
cluster_formation.consul.svc_addr_use_nodename = false

After saving the configuration visible above in the file rabbitmq.conf, we can proceed to building our custom Docker image with RabbitMQ. This image is available in my Docker Hub repository under the alias piomin/rabbitmq, but you can also build it yourself from the Dockerfile by executing the following command.

$ docker build -t piomin/rabbitmq:1.0 .
Sending build context to Docker daemon  3.072kB
Step 1 : FROM rabbitmq:3.7.8-management
 ---> d69a5113ceae
Step 2 : COPY rabbitmq.conf /etc/rabbitmq
 ---> aa306ef88085
Removing intermediate container fda0e21178f9
Step 3 : RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul
 ---> Running in 0892a42bffef
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@fda0e21178f9...
The following plugins have been enabled:
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul

set 5 plugins.
Offline change; changes will take effect at broker restart.
 ---> cfe73f9d9904
Removing intermediate container 0892a42bffef
Successfully built cfe73f9d9904

2. Running RabbitMQ cluster on Docker

In the previous step we have successfully created a Docker image of RabbitMQ configured to run in cluster mode using Consul discovery. Before running this image we need to start an instance of Consul. Here’s the command that starts a Docker container with Consul and exposes it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

We will also create a Docker network to enable communication between containers by hostname. It is required in this scenario, because each RabbitMQ container registers itself using the container hostname.

$ docker network create rabbitmq

Now, we can run our three clustered RabbitMQ containers. We will set a unique hostname for every container (using the -h option) and attach all of them to the same Docker network. We also have to set the container environment variable RABBITMQ_ERLANG_COOKIE to the same value on every node.

$ docker run -d --name rabbit1 -h rabbit1 --network rabbitmq -p 30000:5672 -p 30010:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit2 -h rabbit2 --network rabbitmq -p 30001:5672 -p 30011:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit3 -h rabbit3 --network rabbitmq -p 30002:5672 -p 30012:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0

After running all three instances of RabbitMQ, we can first take a look at the Consul web console. You should see there a new service called rabbitmq. This value is the default cluster name set by the RabbitMQ Consul plugin. We can override it inside rabbitmq.conf using the cluster_formation.consul.svc property.
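
For example, the override is a single line in rabbitmq.conf (the service name used here is only an illustration):

cluster_formation.consul.svc = rabbitmq-cluster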

rabbit-consul-1

We can check whether the cluster has been successfully formed using the RabbitMQ web management console. Every node exposes it; I just had to override the default port 15672 to avoid port conflicts between the three running instances.
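
You can also check the cluster state from the command line of any node, for example:

$ docker exec rabbit1 rabbitmqctl cluster_status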

rabbit-consul-10

3. Integrating RabbitMQ with Vault

In the two previous steps we have successfully run a cluster of three RabbitMQ nodes based on Consul discovery. Now, we will add Vault to our sample system to dynamically generate user credentials. Let’s begin by running Vault on Docker. You can find detailed information about it in my previous article Secure Spring Cloud Microservices with Vault and Nomad. We will run Vault in development mode using the following command.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

You can copy the root token from the container logs using the docker logs -f vault command. Then you have to log in to the Vault web console available under http://192.168.99.100:8200 using this token and enable the RabbitMQ secrets engine as shown below.
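
If you prefer the CLI over the web console, the same secrets engine can be enabled with a single command (assuming the vault CLI is pointed at our instance):

$ vault secrets enable rabbitmq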

rabbit-consul-2

And confirm.

rabbit-consul-3

You can easily run Vault commands using the terminal provided by the web admin console, or do the same thing using the HTTP API. The first command visible below is used for writing the connection details. We just need to pass the RabbitMQ address and the admin user credentials. The provided configuration points to the first RabbitMQ node, but the changes are then replicated to the whole cluster.

$ vault write rabbitmq/config/connection connection_uri="http://192.168.99.100:30010" username="guest" password="guest"

The next step is to configure a role that maps a name in Vault to virtual host permissions.

$ vault write rabbitmq/roles/default vhosts='{"/":{"write": ".*", "read": ".*"}}'

We can test the newly created configuration by running the command vault read rabbitmq/creds/default, as shown below.

rabbit-consul-4

4. Sample application

Our sample application is pretty simple. It consists of two modules: the first of them, sender, is responsible for sending messages to RabbitMQ, while the second, listener, receives incoming messages. Both of them are Spring Boot applications that integrate with RabbitMQ and Vault using the following dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-rabbitmq</artifactId>
	<version>2.0.2.RELEASE</version>
</dependency>

We need to provide some configuration settings in the bootstrap.yml file to integrate our application with Vault. First, we need to enable the integration by setting the property spring.cloud.vault.rabbitmq.enabled to true. Of course, the Vault address and token are required. It is also important to set the property spring.cloud.vault.rabbitmq.role with the name of the Vault role configured in step 3. Spring Cloud Vault injects the username and password generated by Vault into the application properties spring.rabbitmq.username and spring.rabbitmq.password, so the only RabbitMQ-related thing we need to configure in bootstrap.yml is the list of available cluster nodes.

spring:
  rabbitmq:
    addresses: 192.168.99.100:30000,192.168.99.100:30001,192.168.99.100:30002
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: s.7DaENeiqLmsU5ZhEybBCRJhp
      rabbitmq:
        enabled: true
        role: default
        backend: rabbitmq

For test purposes you should enable highly available (mirrored) queues on RabbitMQ. For instructions on how to configure them using policies you can refer to my article RabbitMQ in cluster. The application works at the level of exchanges. The auto-configured connection factory is injected into the application and set on the RabbitTemplate bean.
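
For reference, a typical mirroring policy can be applied with a single rabbitmqctl command like the one below (run on one of the nodes, for example via docker exec; the policy name and queue pattern are only an example):

$ rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'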

@SpringBootApplication
public class Sender {
	
	private static final Logger LOGGER = LoggerFactory.getLogger("Sender");
	
	@Autowired
	RabbitTemplate template;

	public static void main(String[] args) {
		SpringApplication.run(Sender.class, args);
	}

	@PostConstruct
	public void send() {
		for (int i = 0; i < 1000; i++) {
			int id = new Random().nextInt(100000);
			template.convertAndSend(new Order(id, "TEST"+id, OrderType.values()[(id%2)]));
		}
		LOGGER.info("Sending completed.");
	}
    
    @Bean
    public RabbitTemplate template(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setExchange("ex.example");
        return rabbitTemplate;
    }
    
}

Our listener app is connected only to the third node of the cluster (spring.rabbitmq.addresses=192.168.99.100:30002). However, the test queue is mirrored across all the clustered nodes, so it is able to receive messages sent by the sender app. You can easily verify this using my sample applications.

@SpringBootApplication
@EnableRabbit
public class Listener {

	private static final  Logger LOGGER = LoggerFactory.getLogger("Listener");

	private Long timestamp;

	public static void main(String[] args) {
		SpringApplication.run(Listener.class, args);
	}

	@RabbitListener(queues = "q.example")
	public void onMessage(Order order) {
		if (timestamp == null)
			timestamp = System.currentTimeMillis();
		LOGGER.info((System.currentTimeMillis() - timestamp) + " : " + order.toString());
	}

	@Bean
	public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
		SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory);
		factory.setConcurrentConsumers(10);
		factory.setMaxConcurrentConsumers(20);
		return factory;
	}
	
}

Secure Spring Cloud Microservices with Vault and Nomad

One of the significant topics related to microservices security is managing and protecting sensitive data like tokens, passwords or certificates used by your application. As a developer you probably often implement software that connects to external databases, message brokers or other applications. How do you store the credentials used by your application? To be honest, most of the software I have seen in my life just stored sensitive data as plain text in configuration files. Thanks to that, I was always able to retrieve the credentials to every database I needed at a given time just by looking at the application source code. Of course, we can always encrypt sensitive data, but if we are working with many microservices having separate databases, it may not be a very comfortable solution.

Today I’m going to show you how to integrate your Spring Boot application with HashiCorp’s Vault in order to store your sensitive data properly. The first piece of good news is that you don’t have to create any keys or certificates for encryption and decryption, because Vault will do it for you. In a few places in this article I’ll refer to my previous article about HashiCorp’s solutions, Deploying Spring Cloud Microservices on HashiCorp’s Nomad. Now, as then, I also deploy my sample applications on Nomad to take advantage of the built-in integration between these two very interesting HashiCorp tools. We will also use another HashiCorp solution for service discovery in inter-service communication – Consul. It’s also worth mentioning that Spring Cloud provides a dedicated project for integration with Vault – Spring Cloud Vault.

Architecture

The sample presented in this article consists of two applications deployed on HashiCorp’s Nomad: callme-service and caller-service. The caller-service microservice calls an endpoint exposed by callme-service. Inter-service communication is performed using the name of the target application registered in the Consul server. The callme-service microservice stores the history of all interactions triggered by caller-service in a database. The database credentials are stored in Vault. Nomad is integrated with Vault and stores the root token, which is not visible to the applications. The architecture of the described solution is visible in the following picture.

vault-1

The current sample is pretty similar to the one presented in my article Deploying Spring Cloud Microservices on HashiCorp’s Nomad. It is also available in the same GitHub repository, sample-nomad-java-service, but in a different branch, vault. The current sample adds an integration with PostgreSQL and a Vault server for managing the database credentials.

1. Running Vault

We will run Vault inside a Docker container in development mode. A server in development mode does not require any further setup; it is ready to use just after startup. It provides in-memory encrypted storage and an unsecured (HTTP) connection, which is not a problem for demo purposes. We can override the default server IP address and listening port by setting the environment property VAULT_DEV_LISTEN_ADDRESS, but we won’t do that. After startup our instance of Vault is available on port 8200. We can use the admin web console, which in my case is available under http://192.168.99.100:8200. The current version of Vault is 1.0.0.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

It is possible to log in using different methods, but the most suitable way for us is through a token. To do that we have to display the container logs using the command docker logs vault, and then copy the Root Token as shown below.

vault-1

Now you can log in to the Vault web console.

vault-2

2. Integration with Postgres database

In Vault we can create a secrets engine that connects to other services and generates dynamic credentials on demand. Each secrets engine is enabled under a specific path. There is a dedicated engine for various databases, for example PostgreSQL. Before activating such an engine we should run an instance of the Postgres database. This time we will also use a Docker container. It is possible to set the database login and password using environment variables.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=postgres123456 -e POSTGRES_USER=postgres postgres

After starting the database, we may proceed to the engine configuration in the Vault web console. First, let’s create our first secrets engine. We may choose between several different types of engines; the right choice for now is Databases.
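
The same engine can also be enabled from the CLI with a single command:

$ vault secrets enable database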

vault-3

You can apply a new configuration to Vault using the vault command or via the HTTP API. The Vault web console provides a terminal for running CLI commands, but it can be problematic in some cases. For example, I had a problem with escaping strings in some SQL commands, and therefore I had to add them using the HTTP API. No matter which method you use, the next steps are the same. Following the Vault documentation, we first need to configure the plugin for the PostgreSQL database and then provide the connection settings and credentials.

$ vault write database/config/postgres plugin_name=postgresql-database-plugin allowed_roles="default" connection_url="postgresql://{{username}}:{{password}}@192.168.99.100:5432?sslmode=disable" username="postgres" password="postgres123456"

Alternatively, you can perform the same action using the HTTP API. To authenticate against Vault we need to add the X-Vault-Token header with the root token. I have disabled SSL for the connection with Postgres by setting sslmode=disable. There is only one role allowed to use this plugin: default. Now, let’s configure that role.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"plugin_name": "postgresql-database-plugin","allowed_roles": "default","connection_url": "postgresql://{{username}}:{{password}}@localhost:5432?sslmode=disable","username": "postgres","password": "postgres123456"}' http://192.168.99.100:8200/v1/database/config/postgres

The role can be created either with the CLI or with the HTTP API. The name of the role should be the same as the name passed in the allowed_roles field in the previous step. We also have to set the target database name and the SQL statements that create a user with privileges.

$ vault write database/roles/default db_name=postgres creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" default_ttl="1h" max_ttl="24h"

Alternatively you can call the following HTTP API endpoint.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"db_name":"postgres", "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";"]}' http://192.168.99.100:8200/v1/database/roles/default

And that’s all. Now, we can test our configuration using the command vault read database/creds/default with the role’s name, as shown below. You can log in to the database using the returned credentials. By default, they are valid for one hour.

vault-5

3. Enabling Spring Cloud Vault

We have successfully configured the secrets engine that is responsible for creating users on Postgres. Now, we can proceed to development and integrate our application with Vault. Fortunately, there is the Spring Cloud Vault project, which provides out-of-the-box integration with Vault database secrets engines. The only thing we have to do is to include Spring Cloud Vault in our project and provide some configuration settings. Let’s start by setting the Spring Cloud Release Train. We use the newest stable version, Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We have to add two dependencies to our pom.xml. The starter spring-cloud-starter-vault-config is responsible for loading configuration from Vault, and spring-cloud-vault-config-databases is responsible for the integration with database secrets engines.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The sample application also connects to the Postgres database, so we will include the following dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The only thing we have to do is to configure the integration with Vault via Spring Cloud Vault. The following configuration settings should be placed in bootstrap.yml (not application.yml). Because we run our application on a Nomad server, we use the port number dynamically set by Nomad, available under the environment property NOMAD_HOST_PORT_http, and the secret token from Vault, available under the environment property VAULT_TOKEN.

server:
  port: ${NOMAD_HOST_PORT_http:8091}

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres

The important part of the configuration visible above is under the property spring.cloud.vault.postgresql. According to the Spring Cloud documentation, "Username and password are stored in spring.datasource.username and spring.datasource.password so using Spring Boot will pick up the generated credentials for your DataSource without further configuration". Spring Cloud Vault connects to Vault and then uses the default role (previously created in Vault) to generate new database credentials. Those credentials are injected into the spring.datasource properties, and the application connects to the database using them. Everything works fine. Now, let’s try to run our applications on Nomad.

4. Deploying apps on Nomad

Before starting the Nomad node we should also run Consul using its Docker container. Here’s the Docker command that starts a single-node Consul instance.

$ docker run -d --name consul -p 8500:8500 consul

After that we can configure the connection settings for Consul and Vault in the Nomad configuration. I have created the file nomad.conf. Nomad authenticates itself against Vault using the root token. The connection with Consul is not secured. Sometimes it is also required to set the network interface name and the total CPU on the machine for the Nomad client. Most clients are able to determine it automatically, but it did not work for me.

client {
  network_interface = "Połączenie lokalne 4"
  cpu_total_compute = 10400
}

consul {
  address = "192.168.99.100:8500"
}

vault {
  enabled = true
  address = "http://192.168.99.100:8200"
  token = "s.6jhQ1WdcYrxpZmpa0RNd0LMw"
}

Let’s run Nomad in development mode, passing the configuration file location.

$ nomad agent -dev -config=nomad.conf

If everything works fine you should see a log similar to the one below on startup.

vault-6

Once we have successfully started the Nomad agent integrated with Consul and Vault, we can proceed to the application deployment. First build the whole project with the mvn clean install command. The next step is to prepare Nomad’s job descriptor file. For more details about the Nomad deployment process and its descriptor file you can refer to my previous article about it (mentioned in the preface of this article). The descriptor files are available in the application’s GitHub repository under the path callme-service/job.nomad for callme-service, and caller-service/job.nomad for caller-service.

job "callme-service" {
	datacenters = ["dc1"]
	type = "service"
	group "callme" {
		count = 2
		task "api" {
			driver = "java"
			config {
				jar_path    = "C:\\Users\\minkowp\\git\\sample-nomad-java-services-idea\\callme-service\\target\\callme-service-1.0.0-SNAPSHOT.jar"
				jvm_options = ["-Xmx256m", "-Xms128m"]
			}
			resources {
				cpu    = 500 # MHz
				memory = 300 # MB
				network {
					port "http" {}
				}
			}
			service {
				name = "callme-service"
				port = "http"
			}
			vault {
				policies = ["nomad"]
			}
		}
		restart {
			attempts = 1
		}
	}
}

You will have to change the value of the jar_path property to the path of your application binaries. Before applying this deployment to Nomad we have to add some additional configuration in Vault. When adding the integration with Vault we have to pass the names of the policies used for checking permissions. I set the policy name to nomad, which now has to be created in Vault. Our application requires read permission for the paths secret/* and database/*, as shown below.
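
A minimal sketch of such a policy in HCL, together with the command that uploads it, might look like this (the file name and the exact capabilities are assumptions – adjust them to your needs):

path "secret/*" {
  capabilities = ["read"]
}
path "database/*" {
  capabilities = ["read"]
}

$ vault policy write nomad nomad-policy.hcl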

vault-7

Finally, we can deploy our application callme-service on Nomad by executing the following command.

$ nomad job run job.nomad

A similar descriptor file is available for caller-service, so we can deploy it as well. All the microservices have been registered in Consul, as shown below.

vault-8

Here is the list of registered instances of caller-service. As you can see in the picture below, it is available under port 25816.

vault-9

You can also take a look at the Nomad jobs view.

vault-10

Microservices with Spring Cloud Alibaba

Some days ago Spring Cloud announced support for several Alibaba components used for building microservices-based architectures. The project is still in the incubation stage, but there is a plan to graduate it from incubation and officially join the Spring Cloud Release Train in 2019. The currently released version 0.2.0.RELEASE is compatible with Spring Boot 2, while the older 0.1.0.RELEASE line is compatible with Spring Boot 1.x. This project seems to be very interesting, and currently it is the most popular repository among the Spring Cloud Incubator repositories (around 1.5k stars on GitHub).
Currently, the most commonly used Spring Cloud project for building a microservices architecture is Spring Cloud Netflix. As you probably know, this project provides Netflix OSS integrations for Spring Boot apps, including service discovery (Eureka), circuit breaker (Hystrix), intelligent routing (Zuul) and client-side load balancing (Ribbon). The first question that came to my mind when I was reading about Spring Cloud Alibaba was: ’Can Spring Cloud Alibaba be an alternative to Spring Cloud Netflix?’. The answer is yes, but not entirely. Spring Cloud Alibaba still integrates with Ribbon, which is used for load balancing based on service discovery; the Netflix Eureka server is replaced in that case by Nacos.
Nacos (Dynamic Naming and Configuration Service) is an easy-to-use platform designed for dynamic service discovery, configuration and service management. It helps you to build cloud-native applications and a microservices platform easily. Following that definition, you can use Nacos for:

  • Service Discovery – you can register your microservices and discover other microservices via a DNS or HTTP interface. It also provides real-time health checks for registered services
  • Distributed Configuration – the dynamic configuration service provided by Nacos allows you to manage the configuration of all services in a centralized and dynamic manner across all environments. In fact, you can replace Spring Cloud Config Server with it
  • Dynamic DNS – it supports weighted routing, making it easier to implement mid-tier load balancing, flexible routing policies, flow control, and simple DNS resolution services

Spring Cloud supports another popular Alibaba component – Sentinel. Sentinel is responsible for flow control, concurrency, circuit breaking and load protection.

Our sample system, consisting of three microservices and an API gateway, is very similar to the architecture described in my article Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. The only difference is in the tools used for configuration management and service discovery. Microservice organization-service calls some endpoints exposed by department-service, while department-service calls endpoints exposed by employee-service. Inter-service communication is realized using the OpenFeign client. The complexity of the whole system is hidden behind an API gateway implemented using Netflix Zuul.

alibaba-9

1. Running Nacos server

You can run Nacos on both Windows and Linux systems. First, you should download the latest stable release available at https://github.com/alibaba/nacos/releases. After unzipping the archive you have to run it in standalone mode by executing the following command (on Linux use the startup.sh script instead).

nacos/bin/startup.cmd -m standalone

By default, Nacos starts on port 8848. It provides an HTTP API under the context path /nacos/v1, and an admin web console available at http://localhost:8848/nacos. If you take a look at the logs you will find out that it is just an application written with Spring Framework.
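
The HTTP API can also be used directly. For example, once some services are registered (as described later in this article), you can query the discovery API for the instances of a given service – the service name below is just an example.

$ curl "http://localhost:8848/nacos/v1/ns/instance/list?serviceName=employee-service"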

2. Dependencies

As I have mentioned before, Spring Cloud Alibaba is still in the incubation stage, therefore it is not included in the Spring Cloud Release Train. That’s why we need to include a special BOM for Alibaba inside the dependency management section in pom.xml. We will also use the newest stable version of Spring Cloud, which is now Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-alibaba-dependencies</artifactId>
			<version>0.2.0.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Alibaba provides three starters for the currently supported components. These are spring-cloud-starter-alibaba-nacos-discovery for service discovery with Nacos, spring-cloud-starter-alibaba-nacos-config for distributed configuration with Nacos, and spring-cloud-starter-alibaba-sentinel for Sentinel dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>

3. Enabling distributed configuration with Nacos

To enable configuration management with Nacos we only need to include the starter spring-cloud-starter-alibaba-nacos-config. It does not provide an auto-configured address of the Nacos server, so we need to set it explicitly for the application inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      config:
        server-addr: localhost:8848

Our application tries to connect to Nacos and fetch the configuration provided inside the file with the same name as the value of the spring.application.name property. Currently, Spring Cloud Alibaba supports only .properties files, so we need to create the configuration inside the file employee-service.properties. Nacos comes with an elegant way of creating and managing configuration properties. We can use the web admin console for that. The field Data ID visible in the picture below is in fact the name of our configuration file. The list of configuration properties should be placed inside the Configuration Content field.

alibaba-1

The good news related to Spring Cloud Alibaba is that it dynamically refreshes the application configuration after modifications in Nacos. The only thing you have to do in your application is to annotate the beans that should be refreshed with @RefreshScope or @ConfigurationProperties. Now, let’s consider the following situation. We will modify our configuration a little to add some properties with test data as shown below.

alibaba-4

Here’s the implementation of our repository bean. It injects all configuration properties with prefix repository.employees into the list of employees.

@Repository
@ConfigurationProperties(prefix = "repository")
public class EmployeeRepository {

	private List<Employee> employees = new ArrayList<>();
	
	public List<Employee> getEmployees() {
		return employees;
	}

	public void setEmployees(List<Employee> employees) {
		this.employees = employees;
	}
	
	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		Optional<Employee> employee = employees.stream().filter(a -> a.getId().equals(id)).findFirst();
		if (employee.isPresent())
			return employee.get();
		else
			return null;
	}
	
	public List<Employee> findAll() {
		return employees;
	}
	
	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}
	
	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}

}
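
For illustration, the Configuration Content for employee-service could contain entries like the ones below. This is only a sketch – the exact field names of the Employee class are an assumption based on the repository methods shown above.

repository.employees[0].id=1
repository.employees[0].name=John Smith
repository.employees[0].organizationId=1
repository.employees[0].departmentId=1
repository.employees[1].id=2
repository.employees[1].name=Anna Kowalska
repository.employees[1].organizationId=1
repository.employees[1].departmentId=2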

Now, you can change some values of the properties as shown in the picture below. Then, if you call employee-service, which is available on port 8090 (http://localhost:8090), you should see the full list of employees with the modified values.

alibaba-3

The same configuration properties should be created for our two other microservices: department-service and organization-service. Assuming you have already done it, you should have the following configuration entries in Nacos.

alibaba-5

4. Enabling service discovery with Nacos

To enable service discovery with Nacos you first need to include the starter spring-cloud-starter-alibaba-nacos-discovery. As with the configuration support, you also need to set the address of the Nacos server inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848

The last step is to enable discovery client for the application by annotating the main class with @EnableDiscoveryClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
}

If you provide the same implementation for all our microservices and run them, you will see the following list of registered applications in the Nacos web console.

alibaba-7

5. Inter-service communication

Communication between microservices is realized using the standard Spring Cloud components: RestTemplate or the OpenFeign client. By default, load balancing is realized by the Ribbon client. The only difference in comparison to Spring Cloud Netflix is the discovery server used as the service registry in the communication process. Here’s the implementation of the Feign client in department-service responsible for integration with the endpoint GET /department/{departmentId} exposed by employee-service.

@FeignClient(name = "employee-service")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId);
	
}

Don’t forget to enable Feign clients for Spring Boot application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableSwagger2
public class DepartmentApplication {

	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
}

We should also run multiple instances of employee-service in order to test load balancing on the client side. Before doing that, we can enable dynamic generation of the port number by setting the property server.port to 0 inside the configuration stored in Nacos. Now, we can run many instances of a single service using the same configuration settings without the risk of port conflicts. Let’s scale up the number of employee-service instances.

alibaba-8

If you would like to test inter-service communication you can call the following methods that use the OpenFeign client for calling endpoints exposed by other microservices: GET /organization/{organizationId}/with-employees from department-service, and GET /{id}/with-departments, GET /{id}/with-departments-and-employees, GET /{id}/with-employees from organization-service.

6. Running API Gateway

Now it is time to run the last component in our architecture – an API gateway. It is built on top of Spring Cloud Netflix Zuul. It also uses Nacos as a discovery and configuration server.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>

After including required dependencies we need to enable Zuul proxy and discovery client for the application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
@EnableSwagger2
public class ProxyApplication {

	public static void main(String[] args) {
		SpringApplication.run(ProxyApplication.class, args);
	}
	
}

Here’s the configuration of Zuul routes defined for our three sample microservices.

zuul:
  routes:
    department:
      path: /department/**
      serviceId: department-service
    employee:
      path: /employee/**
      serviceId: employee-service
    organization:
      path: /organization/**
      serviceId: organization-service
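
With these routes in place a request sent to the gateway is forwarded to the matching service after the route prefix is stripped. For example, assuming the gateway runs on port 8080 and employee-service exposes a GET /{id} endpoint, the following hypothetical call would be routed to employee-service.

$ curl http://localhost:8080/employee/1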

After startup the gateway exposes the Swagger2 specification for the APIs of all the defined microservices. Assuming you have run it on port 8080, you can access it at http://localhost:8080/swagger-ui.html. Thanks to that, you can call all the methods from one single location.

spring-cloud-3

Conclusion

The source code of the sample applications is available on GitHub in the repository sample-spring-microservices-new in the branch alibaba: https://github.com/piomin/sample-spring-microservices-new/tree/alibaba. The main purpose of this article was to show you how to replace some popular Spring Cloud components with Alibaba Nacos used for service discovery and configuration management. The Spring Cloud Alibaba project is at an early stage of development, so we can probably expect some new interesting features in the near future. You can find some other examples on the Spring Cloud Alibaba GitHub site: https://github.com/spring-cloud-incubator/spring-cloud-alibaba/tree/master/spring-cloud-alibaba-examples.

Reactive programming with Project Reactor

If you are building reactive microservices you will probably have to merge data streams from different source APIs into a single result stream. This inspired me to create this article containing some of the most common scenarios of using reactive streams during inter-service communication in a microservice-based architecture. I have already described some aspects related to reactive programming with Spring, based on the Spring WebFlux and Spring Data JDBC projects, in my previous articles.

Spring Framework supports reactive programming since version 5. That support is built on top of Project Reactor – https://projectreactor.io. Reactor is a fourth-generation reactive library for building non-blocking applications on the JVM, based on the Reactive Streams Specification. Working with this library can be difficult at first, especially if you don’t have any experience with reactive streams. Reactor Core gives us two data types that enable us to produce a stream of data: Mono and Flux. With Flux we can emit 0..n elements, while with Mono we can create a stream of 0..1 elements. Both types implement the Publisher interface and both are lazy, which means they won’t be executed until you subscribe to them. Therefore, when building reactive APIs it is important not to block the stream, and Spring WebFlux doesn’t allow that.
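
Here’s a minimal, self-contained Kotlin sketch (my own illustration, not taken from the sample repository) that shows both types and their laziness – nothing is printed until subscribe is called.

import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

fun main() {
    // declaring the streams does not trigger any processing yet
    val numbers: Flux<Int> = Flux.just(1, 2, 3).map { it * 10 }
    val greeting: Mono<String> = Mono.just("hello").map { it.toUpperCase() }

    // only subscribing consumes the streams and makes them emit elements
    numbers.subscribe { println("Flux emitted: $it") }   // 10, 20, 30
    greeting.subscribe { println("Mono emitted: $it") }  // HELLO
}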

Introduction

The sample project is available on GitHub in the repository reactive-playground: https://github.com/piomin/reactive-playground.git. It is written in Kotlin. Apart from some Kotlin libraries, the only dependency that needs to be added in order to use Project Reactor is reactor-core.

<dependency>
	<groupId>io.projectreactor</groupId>
	<artifactId>reactor-core</artifactId>
	<version>3.2.1.RELEASE</version>
</dependency>

I would not like to show you the features of Project Reactor based on simple String objects like in many other articles. Therefore, I have created the following class hierarchy for our tests, which allows us to simulate APIs built for three different domain objects.

reactor-4

Class Organization contains a list of Employee and a list of Department. Each department contains a list of Employee assigned only to the given department inside the organization. Class Employee has the properties organizationId, which assigns it to an organization, and departmentId, which assigns it to a department.

data class Employee(var id: Int, var name: String, var salary: Int) {
    var organizationId: Int? = null
    var departmentId: Int? = null

    constructor(id: Int, name: String, salary: Int, organizationId: Int, departmentId: Int) : this(id, name, salary) {
        this.organizationId = organizationId
        this.departmentId = departmentId
    }

    constructor(id: Int, name: String, salary: Int, organizationId: Int) : this(id, name, salary) {
        this.organizationId = organizationId
    }
}

Here’s the implementation of Department class.

class Department(var id: Int, var name: String, var organizationId: Int) {
    var employees: MutableList<Employee> = ArrayList()

    constructor(id: Int, name: String, organizationId: Int, employees: MutableList<Employee>) : this(id, name, organizationId) {
        this.employees.addAll(employees)
    }

    fun addEmployees(employees: MutableList<Employee>) : Department {
        this.employees.addAll(employees)
        return this
    }

    fun addEmployee(employee: Employee) : Department {
        this.employees.add(employee)
        return this
    }

}

Here’s the implementation of Organization class.

class Organization(var id: Int, var name: String) {
    var employees: MutableList<Employee> = ArrayList()
    var departments: MutableList<Department> = ArrayList()

    constructor(id: Int, name: String, employees: MutableList<Employee>, departments: MutableList<Department>) : this(id, name){
        this.employees.addAll(employees)
        this.departments.addAll(departments)
    }

    constructor(id: Int, name: String, employees: MutableList<Employee>) : this(id, name){
        this.employees.addAll(employees)
    }
}

Scenario 1

We have two API methods that return data streams. The first of them returns a Flux emitting employees assigned to the given organization. The second just returns a Mono with the current organization.

private fun getOrganizationByName(name: String) : Mono<Organization> {
	return Mono.just(Organization(1, name))
}

private fun getEmployeesByOrganization(id: Int) : Flux<Employee> {
	return Flux.just(Employee(1, "Employee1", 1000, id),
					 Employee(2, "Employee2", 2000, id))
}

We would like to return a single stream emitting an organization that contains the list of its employees, as shown below.

reactor-scenario-1

Here’s the solution. We use the zipWhen method, which waits for the result from the source Mono and then calls the second Mono. Because we can zip only the same stream types (in that case Mono), we need to convert the Flux<Employee> returned by the getEmployeesByOrganization method into Mono<MutableList<Employee>> using the collectList function. Thanks to zipWhen we can then combine the two Mono streams and create a new object inside the map function.

@Test
fun testScenario1() {
	val organization : Mono<Organization> = getOrganizationByName("test")
		.zipWhen { organization ->
			getEmployeesByOrganization(organization.id!!).collectList()
		}
		.map { tuple -> 
			Organization(tuple.t1.id, tuple.t1.name, tuple.t2)
		}
}
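
Because the stream is lazy, the test above does not actually trigger any emission. One way to consume and verify it is the StepVerifier from the reactor-test module – the following lines (my addition, assuming reactor-test is on the test classpath) could be appended at the end of testScenario1.

// import reactor.test.StepVerifier
StepVerifier.create(organization)
	.expectNextMatches { it.employees.size == 2 }
	.verifyComplete()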

Scenario 2

Let’s consider another scenario. Now, we have two Flux streams that emit employees and departments. Every employee has the property departmentId responsible for its assignment to a department.

private fun getDepartments() : Flux<Department> {
    return Flux.just(Department(1, "X", 1),
                     Department(2, "Y", 1))
}

private fun getEmployees() : Flux<Employee> {
    return Flux.just(Employee(1, "Employee1", 1000, 1, 1),
            Employee(2, "Employee2", 2000, 1, 1),
            Employee(3, "Employee3", 1000, 1, 2),
            Employee(4, "Employee4", 2000, 1, 2))
}

The goal is to merge those two streams and return a single Flux stream emitting departments that contain all employees assigned to the given department. Here’s the picture that illustrates the transformation described above.

reactor-5

We can do that in two ways, as shown below. The first way calls the flatMap function on the stream of departments. Inside flatMap we zip every single Department with the stream of employees. That stream is filtered by departmentId and converted into a Mono. Finally, we create the department containing the list of its employees inside the map function.
The second way groups the Flux of employees by departmentId. Then it invokes zipping and mapping functions similar to the first way.

@Test
fun testScenario2() {
	val departments: Flux<Department> = getDepartments()
		.flatMap { department ->
			Mono.just(department)
				.zipWith(getEmployees().filter { it.departmentId == department.id }.collectList())
				.map { t -> t.t1.addEmployees(t.t2) }
		}

	val departments2: Flux<Department> = getEmployees()
		.groupBy { it.departmentId }
		.flatMap { t -> getDepartments().filter { it.id == t.key() }.elementAt(0)
			.zipWith(t.collectList())
			.map { it.t1.addEmployees(it.t2) }
		}
}

Scenario 3

This scenario is simpler than the two previous ones. We have two API methods that emit Flux streams with the same object type. The first of them emits employees having only the id, name and salary properties set, while the second emits employees with the id, organizationId and departmentId properties.

private fun getEmployeesBasic() : Flux<Employee> {
	return Flux.just(Employee(1, "AA", 1000),
		                  Employee(2, "BB", 2000))
}

private fun getEmployeesRelationships() : Flux<Employee> {
	// only id, organizationId and departmentId matter here;
	// name and salary get placeholder values to satisfy the constructor
	return Flux.just(Employee(1, "", 0, 1, 1),
			 Employee(2, "", 0, 1, 2))
}

We want to convert them into a single stream emitting employees with the full set of properties. The following picture illustrates the described transformation.

reactor-scenario-3

In that case the solution is pretty simple. We zip the two Flux streams using the zipWith function, and then map the two zipped objects into a single one containing the full set of properties.

@Test
fun testScenario3() {
	val employees : Flux<Employee> = getEmployeesBasic()
		.zipWith(getEmployeesRelationships())
		.map { t -> Employee(t.t1.id, t.t1.name, t.t1.salary, t.t2.organizationId!!, t.t2.departmentId!!) }
}

Scenario 4

In this scenario we have two independent Flux streams that emit the same type of objects – Employee.

private fun getEmployeesFirstPart() : Flux<Employee> {
	return Flux.just(Employee(1, "AA", 1000), Employee(3, "BB", 3000))
}

private fun getEmployeesSecondPart() : Flux<Employee> {
	return Flux.just(Employee(2, "CC", 2000), Employee(4, "DD", 4000))
}

We would like to merge those two streams into a single stream ordered by id. The following picture shows that transformation.

reactor-scenario-4

Here’s the solution. We use the mergeOrderedWith function with a comparator that compares employee ids. Then we can perform some transformation on every emitted object; here it is optional and only demonstrates the usage of the map function.

@Test
fun testScenario4() {
	val persons: Flux<Employee> = getEmployeesFirstPart()
		.mergeOrderedWith(getEmployeesSecondPart(), Comparator { o1, o2 -> o1.id.compareTo(o2.id) })
		.map {
			Employee(it.id, it.name, it.salary, 1, 1)
		}
}

Scenario 5

And the last scenario in this article. This time we have a single input stream Flux<Department> emitting the departments of a given organization, where each department contains the list of all employees assigned to it. Here’s our API method implementation.

private fun getDepartmentsByOrganization(id: Int) : Flux<Department> {
	val dep1 = Department(1, "A", id, mutableListOf(
			Employee(1, "Employee1", 1000, id, 1),
			Employee(2, "Employee2", 2000, id, 1)
		)
	)
	val dep2 = Department(2, "B", id, mutableListOf(
			Employee(3, "Employee3", 1000, id, 2),
			Employee(4, "Employee4", 2000, id, 2)
		)
	)
	return Flux.just(dep1, dep2)
}

The goal is to convert that stream into a Mono<Organization> containing the list of all employees in the organization, regardless of the department they are assigned to. The following picture visualizes the described transformation.

reactor-scenario-5

Here’s the solution. We invoke the flatMapIterable function, which flattens Flux<Department> into Flux<Employee> by returning each department’s List<Employee>. Then we collect all emitted employees into a single list and add it to a newly created Organization object inside the map function.

@Test
fun testScenario5() {
	var organization: Mono<Organization> = getDepartmentsByOrganization(1)
		.flatMapIterable { department -> department.employees }
		.collectList()
		.map { t -> Organization(1, "X", t) }
}

Introduction to Reactive APIs with Postgres, R2DBC, Spring Data JDBC and Spring WebFlux

There are quite a few technologies listed in the title of this article. Spring WebFlux has been introduced with Spring 5 and Spring Boot 2 as a project for building reactive-stack web applications. I have already described how to use it together with Spring Boot and Spring Cloud for building reactive microservices in the article Reactive Microservices with Spring WebFlux and Spring Cloud. Spring 5 has also introduced some projects supporting reactive access to NoSQL databases like Cassandra, MongoDB or Couchbase. But there was still a lack of support for reactive access to relational databases. The change is coming with the R2DBC (Reactive Relational Database Connectivity) project, which is also being developed by Pivotal members. It seems to be a very interesting initiative; however, it is rather at the beginning of the road. Anyway, there is a module for integration with Postgres, and we will use it for our demo application. R2DBC is not the only new interesting solution described in this article. I will also show you how to use Spring Data JDBC – another really interesting project released recently.
It is worth saying a few words about Spring Data JDBC. This project has already been released and is available in version 1.0. It is a part of the bigger Spring Data framework. It offers a repository abstraction based on JDBC. The main reason for creating that library was to allow access to relational databases in the Spring Data way (through CrudRepository interfaces) without including the JPA library in the application dependencies. Of course, JPA is still certainly the main persistence API used in Java applications. Spring Data JDBC aims to be much simpler conceptually than JPA by not implementing popular patterns like lazy loading, caching, dirty tracking or sessions. It also provides only very limited support for annotation-based mapping. Finally, it provides an implementation of reactive repositories that uses R2DBC for accessing a relational database. Although that module is still under development (only a SNAPSHOT version is available), we will try to use it in our demo application. Let’s proceed to the implementation.

Including dependencies

We use Kotlin for implementation. So first, we include some required Kotlin dependencies.

<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib</artifactId>
	<version>${kotlin.version}</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-test-junit</artifactId>
	<version>${kotlin.version}</version>
	<scope>test</scope>
</dependency>

We should also add kotlin-maven-plugin with support for Spring.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<version>${kotlin.version}</version>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<args>
			<arg>-Xjsr305=strict</arg>
		</args>
		<compilerPlugins>
			<plugin>spring</plugin>
		</compilerPlugins>
	</configuration>
</plugin>

Then, we may proceed to including the frameworks required for the demo implementation. We need to include the special SNAPSHOT version of Spring Data JDBC dedicated to accessing the database using R2DBC. We also have to add some R2DBC libraries and Spring WebFlux. As you may see below, only Spring WebFlux is available in a stable version (as a part of the Spring Boot release).

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.data</groupId>
	<artifactId>spring-data-jdbc</artifactId>
	<version>1.0.0.r2dbc-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.r2dbc</groupId>
	<artifactId>r2dbc-spi</artifactId>
	<version>1.0.0.M5</version>
</dependency>
<dependency>
	<groupId>io.r2dbc</groupId>
	<artifactId>r2dbc-postgresql</artifactId>
	<version>1.0.0.M5</version>
</dependency>

It is also important to set dependency management for Spring Data project.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.data</groupId>
			<artifactId>spring-data-releasetrain</artifactId>
			<version>Lovelace-RELEASE</version>
			<scope>import</scope>
			<type>pom</type>
		</dependency>
	</dependencies>
</dependencyManagement>

Repositories

We are using the well-known Spring Data style of CRUD repository implementation. In that case we need to create an interface that extends the ReactiveCrudRepository interface.
Here’s the implementation of the repository for managing Employee objects.

interface EmployeeRepository : ReactiveCrudRepository<Employee, Int> {
    @Query("select id, name, salary, organization_id from employee e where e.organization_id = $1")
    fun findByOrganizationId(organizationId: Int) : Flux<Employee>
}

Here’s another repository implementation – this time for managing Organization objects.

interface OrganizationRepository : ReactiveCrudRepository<Organization, Int> {
}

Implementing Entities and DTOs

Kotlin provides a convenient way of creating an entity class by declaring it as a data class. When using Spring Data JDBC we have to set the primary key for the entity by annotating the field with @Id. It assumes the key is automatically incremented by the database. If you are not using auto-increment columns, you have to use a BeforeSaveEvent listener, which sets the ID of the entity. However, I tried to set such a listener for my entity, and it just didn’t work with the reactive version of Spring Data JDBC.
Here’s the implementation of the Employee entity class. It is worth mentioning that Spring Data JDBC will automatically map the class field organizationId to the database column organization_id.

data class Employee(val name: String, val salary: Int, val organizationId: Int) {
    @Id 
    var id: Int? = null
}

Here’s an implementation of Organization entity class.

data class Organization(var name: String) {
    @Id 
    var id: Int? = null
}

R2DBC does not support any lists or sets. Because I’d like to return a list of employees inside the Organization object in one of the API endpoints, I have created a DTO containing such a list, as shown below.

data class OrganizationDTO(var id: Int?, var name: String) {
    var employees : MutableList<Employee> = ArrayList()
    constructor(employees: MutableList<Employee>) : this(null, "") {
        this.employees = employees
    }
    // additional constructor used by the controller shown later
    constructor(id: Int?, name: String, employees: MutableList<Employee>) : this(id, name) {
        this.employees = employees
    }
}

The SQL scripts corresponding to the created entities are visible below. The serial field type will automatically create a sequence and attach it to the id field.

CREATE TABLE employee (
    name character varying NOT NULL,
    salary integer NOT NULL,
    id serial PRIMARY KEY,
    organization_id integer
);
CREATE TABLE organization (
    name character varying NOT NULL,
    id serial PRIMARY KEY
);

Building sample web applications

For demo purposes we will build two independent applications: employee-service and organization-service. The application organization-service communicates with employee-service using the WebFlux WebClient. It gets the list of employees assigned to the organization and includes them in the response together with the Organization object. The source code of the sample applications is available on GitHub in the repository sample-spring-data-webflux: https://github.com/piomin/sample-spring-data-webflux.
Ok, let’s begin by declaring the Spring Boot main class. We need to enable Spring Data JDBC repositories by annotating the main class with @EnableJdbcRepositories.

@SpringBootApplication
@EnableJdbcRepositories
class EmployeeApplication

fun main(args: Array<String>) {
    runApplication<EmployeeApplication>(*args)
}

Working with R2DBC and Postgres requires some configuration. Probably due to the early stage of development of Spring Data JDBC and R2DBC, there is no Spring Boot auto-configuration for Postgres. We need to declare the connection factory, database client, and repository inside a @Configuration class.

@Configuration
class EmployeeConfiguration {

    @Bean
    fun repository(factory: R2dbcRepositoryFactory): EmployeeRepository {
        return factory.getRepository(EmployeeRepository::class.java)
    }

    @Bean
    fun factory(client: DatabaseClient): R2dbcRepositoryFactory {
        val context = RelationalMappingContext()
        context.afterPropertiesSet()
        return R2dbcRepositoryFactory(client, context)
    }

    @Bean
    fun databaseClient(factory: ConnectionFactory): DatabaseClient {
        return DatabaseClient.builder().connectionFactory(factory).build()
    }

    @Bean
    fun connectionFactory(): PostgresqlConnectionFactory {
        val config = PostgresqlConnectionConfiguration.builder() //
                .host("192.168.99.100") //
                .port(5432) //
                .database("reactive") //
                .username("reactive") //
                .password("reactive123") //
                .build()

        return PostgresqlConnectionFactory(config)
    }

}

Finally, we can create REST controllers that contain the definitions of our reactive API methods. With Kotlin it does not take much space. The following controller definition contains three GET methods that allow you to find all employees, all employees assigned to a given organization, or a single employee by id.

@RestController
@RequestMapping("/employees")
class EmployeeController {

    @Autowired
    lateinit var repository : EmployeeRepository

    @GetMapping
    fun findAll() : Flux<Employee> = repository.findAll()

    @GetMapping("/{id}")
    fun findById(@PathVariable id : Int) : Mono<Employee> = repository.findById(id)

    @GetMapping("/organization/{organizationId}")
    fun findByOrganizationId(@PathVariable organizationId : Int) : Flux<Employee> = repository.findByOrganizationId(organizationId)

    @PostMapping
    fun add(@RequestBody employee: Employee) : Mono<Employee> = repository.save(employee)

}

Inter-service Communication

For OrganizationController the implementation is a little bit more complicated. Because organization-service communicates with employee-service, we first need to declare a reactive WebFlux WebClient builder.

@Bean
fun clientBuilder() : WebClient.Builder {
	return WebClient.builder()
}

Then, similarly to the repository bean, the builder is injected into the controller. It is used inside the findByIdWithEmployees method for calling the GET /employees/organization/{organizationId} endpoint exposed by employee-service. As you can see in the code fragment below, it provides a reactive API and returns a Flux containing the list of found employees. This list is injected into the OrganizationDTO object using the zipWith method from Reactor.

@RestController
@RequestMapping("/organizations")
class OrganizationController {

    @Autowired
    lateinit var repository : OrganizationRepository
    @Autowired
    lateinit var clientBuilder : WebClient.Builder

    @GetMapping
    fun findAll() : Flux<Organization> = repository.findAll()

    @GetMapping("/{id}")
    fun findById(@PathVariable id : Int) : Mono<Organization> = repository.findById(id)

    @GetMapping("/{id}/withEmployees")
    fun findByIdWithEmployees(@PathVariable id : Int) : Mono<OrganizationDTO> {
        val employees : Flux<Employee> = clientBuilder.build().get().uri("http://localhost:8090/employees/organization/$id")
                .retrieve().bodyToFlux(Employee::class.java)
        val org : Mono<Organization> = repository.findById(id)
        return org.zipWith(employees.collectList())
                .map { tuple -> OrganizationDTO(tuple.t1.id as Int, tuple.t1.name, tuple.t2) }
    }

    @PostMapping
    fun add(@RequestBody organization: Organization) : Mono<Organization> = repository.save(organization)

}

How does it work?

Before running the tests we need to start a Postgres database. Here’s the Docker command used for running the Postgres container. It creates a user with a password and sets up a default database.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=reactive -e POSTGRES_PASSWORD=reactive123 -e POSTGRES_DB=reactive postgres

Then we need to create some test tables, so you have to run the SQL script from the section Implementing Entities and DTOs (one way of doing that is shown below the picture). After that you can start our test applications. If you do not override the default settings provided inside the application.yml files, employee-service listens on port 8090 and organization-service on port 8095. The following picture illustrates the architecture of our sample system.
spring-data-1
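
One way to load that script into the running container is to pipe it into psql (a sketch – the schema.sql file name is just an example).

$ docker exec -i postgres psql -U reactive -d reactive < schema.sql
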
Now, let’s add some test data using reactive API exposed by the applications.

$ curl -d '{"name":"Test1"}' -H "Content-Type: application/json" -X POST http://localhost:8095/organizations
$ curl -d '{"name":"Name1", "balance":5000, "organizationId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/employees
$ curl -d '{"name":"Name2", "balance":10000, "organizationId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/employees

Finally, you can call the GET /organizations/{id}/withEmployees method, for example using your web browser or curl. The result should be similar to the one visible in the following picture.

spring-data-2
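
For reference, the same call made with curl and a response similar to the one from the picture might look as follows (generated ids and field ordering may differ).

$ curl http://localhost:8095/organizations/1/withEmployees
{"id":1,"name":"Test1","employees":[{"name":"Name1","salary":5000,"organizationId":1,"id":1},{"name":"Name2","salary":10000,"organizationId":1,"id":2}]}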