Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barrier stands Micronaut, a framework that is quickly gaining popularity, designed especially for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size (in MB) of the generated fat JAR file
  • the application startup time
  • the performance of the application, understood as the average response time from a REST endpoint during sample load testing



The Future of Spring Cloud Microservices After Netflix Era

If somebody asked you about Spring Cloud, the first thing that came into your mind would probably be Netflix OSS support. Support for tools like Eureka, Zuul or Ribbon is provided not only by Spring, but also by some other popular frameworks used for building microservices architectures, like Apache Camel, Vert.x or Micronaut. Currently, Spring Cloud Netflix is the most popular project within Spring Cloud. It has around 3.2k stars on GitHub, while the second best has around 1.4k. Therefore, it is quite surprising that Pivotal has announced that most of the Spring Cloud Netflix modules are entering maintenance mode. You can read more about it in the post published on the Spring blog by Spencer Gibb: https://spring.io/blog/2018/12/12/spring-cloud-greenwich-rc1-available-now.

Redis in Microservices Architecture

Redis can be widely used in microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, a cache or a message broker. Since it is also a key/value store, we can use it as a configuration server or a discovery server in a microservices architecture. Although it is usually defined as an in-memory data structure store, we can also run it in persistent mode.
Today, I'm going to show you some examples of using Redis with microservices built on top of the Spring Boot and Spring Cloud frameworks. These applications will communicate with each other asynchronously using Redis Pub/Sub, use Redis as a cache or primary database, and finally use Redis as a configuration server.

Kotlin Microservices with Micronaut, Spring Cloud and JPA

The Micronaut Framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach may be integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools from the Spring Cloud project. That's good news, because in version 1.0 the only supported distributed solution was Consul, and there was no possibility to use Eureka discovery together with the Consul property source (running them together ended with an exception).

Quick Guide to Microservices with Micronaut Framework

The Micronaut framework has been introduced as an alternative to Spring Boot for building microservice applications. At first glance it is very similar to Spring. It also implements patterns such as dependency injection and inversion of control based on annotations, however it uses JSR-330 (javax.inject) for doing so. It has been designed especially for building serverless functions, Android applications, and low memory-footprint microservices. This means that it should offer faster startup times, lower memory usage and easier unit testing than competing frameworks. However, today I don't want to focus on those characteristics of Micronaut. I'm going to show you how to build a simple microservices-based system using this framework. You can easily compare it with Spring Boot and Spring Cloud by reading my previous article about the same subject, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. Does Micronaut have a chance to gain the same popularity as Spring Boot? Let's find out.

Our sample system consists of three independent microservices that communicate with each other. All of them integrate with Consul in order to fetch shared configuration. After startup, every single service registers itself in Consul. The organization-service and department-service applications call endpoints exposed by other microservices using the Micronaut declarative HTTP client. The traces from this communication are sent to Zipkin. The source code of the sample applications is available on GitHub in the repository sample-micronaut-microservices.

[Image: architecture of the sample Micronaut microservices system]

Step 1. Creating application

We need to start by including some dependencies in our Maven pom.xml. First, let's define the BOM with the newest stable Micronaut version.

<properties>
	<exec.mainClass>pl.piomin.services.employee.EmployeeApplication</exec.mainClass>
	<micronaut.version>1.0.3</micronaut.version>
	<jdk.version>1.8</jdk.version>
</properties>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.micronaut</groupId>
			<artifactId>micronaut-bom</artifactId>
			<version>${micronaut.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The list of required dependencies isn't very long. Not all of them are strictly required, but they will be useful in our demo. For example, micronaut-management needs to be included in case we would like to expose some built-in management and monitoring endpoints.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>

To build the application uber-jar we need to configure a plugin responsible for packaging the JAR file together with all its dependencies. It can be, for example, maven-shade-plugin (a sample configuration is shown a little further below). When building a new application it is also worth exposing basic information about it under the /info endpoint. As I have already mentioned, Micronaut adds support for monitoring your app via HTTP endpoints after including the micronaut-management artifact. Management endpoints are integrated with the Micronaut security module, which means that you need to authenticate yourself to be able to access them. To keep things simple, we can disable authentication for the /info endpoint.

endpoints:
  info:
    enabled: true
    sensitive: false
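
Here's a minimal maven-shade-plugin configuration sketch for building the uber-jar mentioned above. Treat it as an assumption based on the plugin's standard usage rather than the exact setup of the sample repository: it reuses the exec.mainClass property defined earlier and merges META-INF/services descriptors, which Micronaut relies on.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-shade-plugin</artifactId>
	<version>3.2.1</version>
	<executions>
		<execution>
			<phase>package</phase>
			<goals>
				<goal>shade</goal>
			</goals>
			<configuration>
				<transformers>
					<!-- set Main-Class in the manifest so the JAR is runnable -->
					<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
						<mainClass>${exec.mainClass}</mainClass>
					</transformer>
					<!-- merge META-INF/services files coming from different dependencies -->
					<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
				</transformers>
			</configuration>
		</execution>
	</executions>
</plugin>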

We can customize the /info endpoint by adding some of the supported info sources. This mechanism is very similar to the Spring Boot Actuator approach. If a git.properties file is available on the classpath, all the values inside that file will be exposed by the /info endpoint. The same applies to the build-info.properties file, which needs to be placed inside the META-INF directory. However, in comparison with Spring Boot, we need to provide more configuration in pom.xml to generate those files and package them into the application JAR. The following Maven plugins are responsible for generating the required properties files.

<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<version>2.2.6</version>
	<executions>
		<execution>
			<id>get-the-git-infos</id>
			<goals>
				<goal>revision</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<verbose>true</verbose>
		<dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
		<dateFormat>MM-dd-yyyy '@' HH:mm:ss Z</dateFormat>
		<generateGitPropertiesFile>true</generateGitPropertiesFile>
		<generateGitPropertiesFilename>src/main/resources/git.properties</generateGitPropertiesFilename>
		<failOnNoGitDirectory>true</failOnNoGitDirectory>
	</configuration>
</plugin>
<plugin>
	<groupId>com.rodiontsev.maven.plugins</groupId>
	<artifactId>build-info-maven-plugin</artifactId>
	<version>1.2</version>
	<configuration>
		<filename>classes/META-INF/build-info.properties</filename>
		<projectProperties>
			<projectProperty>project.groupId</projectProperty>
			<projectProperty>project.artifactId</projectProperty>
			<projectProperty>project.version</projectProperty>
		</projectProperties>
	</configuration>
	<executions>
		<execution>
			<phase>prepare-package</phase>
			<goals>
				<goal>extract</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Now, our /info endpoint is able to print the most important information about our app including Maven artifact name, version, and last Git commit id.

[Image: /info endpoint response with build and Git commit information]

Step 2. Exposing HTTP endpoints

Micronaut provides its own annotations for marking HTTP endpoints and methods. As I have mentioned in the preface, it also uses JSR-330 (javax.inject) for dependency injection. Our controller class should be annotated with @Controller. We also have annotations for every HTTP method type. A path parameter is automatically mapped to the class method parameter with the same name, which is a nice simplification in comparison to Spring MVC, where we need to use the @PathVariable annotation. The repository bean used for CRUD operations is injected into the controller using the @Inject annotation.

@Controller("/employees")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Inject
    EmployeeRepository repository;

    @Post
    public Employee add(@Body Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.add(employee);
    }

    @Get("/{id}")
    public Employee findById(Long id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Employee> findAll() {
        LOGGER.info("Employees find");
        return repository.findAll();
    }

    @Get("/department/{departmentId}")
    @ContinueSpan
    public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
        LOGGER.info("Employees find: departmentId={}", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    public List<Employee> findByOrganization(@SpanTag("organizationId") Long organizationId) {
        LOGGER.info("Employees find: organizationId={}", organizationId);
        return repository.findByOrganization(organizationId);
    }

}

Our repository bean is pretty simple. It just provides an in-memory store for Employee instances. We mark it with the @Singleton annotation.

@Singleton
public class EmployeeRepository {

	private List<Employee> employees = new ArrayList<>();
	
	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		Optional<Employee> employee = employees.stream().filter(a -> a.getId().equals(id)).findFirst();
		if (employee.isPresent())
			return employee.get();
		else
			return null;
	}
	
	public List<Employee> findAll() {
		return employees;
	}
	
	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}
	
	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}
	
}

Micronaut is able to automatically generate a Swagger YAML definition from our controllers and methods based on their annotations. To achieve this, we first need to include the following dependency in our pom.xml.

<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>

Then we should annotate the application main class with @OpenAPIDefinition and provide some basic information like the title or version number. Here's the employee application main class.

@OpenAPIDefinition(
    info = @Info(
        title = "Employees Management",
        version = "1.0",
        description = "Employee API",
        contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
    )
)
public class EmployeeApplication {

    public static void main(String[] args) {
        Micronaut.run(EmployeeApplication.class);
    }

}

Micronaut generates the Swagger file based on the title and version fields inside the @Info annotation. In that case our YAML definition file is available under the name employees-management-1.0.yml, and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file.

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**

Now, our file is available under the path http://localhost:8080/swagger/employees-management-1.0.yml if we run the application on the default 8080 port (we won't do that, as I will explain in the next part of this article). In comparison to Spring Boot, there is no project like SpringFox Swagger for Micronaut, so we need to copy the content into an online editor in order to see a graphical representation of our Swagger YAML. Here it is.

[Image: Swagger editor view of the generated employees-management API definition]

Ok, since we have finished the implementation of a single microservice, we may proceed to the cloud-native features provided by Micronaut.

Step 3. Distributed configuration

Micronaut comes with built-in APIs for distributed configuration. In fact, the only solution available for now is distributed configuration based on HashiCorp's Consul. Micronaut's features for externalizing and adapting configuration to the environment are very similar to the Spring Boot approach. We also have application.yml and bootstrap.yml files, which can be used for application environment configuration. When using distributed configuration we first need to provide a bootstrap.yml file on the classpath. It should contain the address of the remote configuration server and the preferred configuration store format. Of course, we first need to enable the distributed configuration client by setting the property micronaut.config-client.enabled to true. Here's the bootstrap.yml file for department-service.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "192.168.99.100:8500"
    config:
      format: YAML

We can choose between properties, JSON, YAML and FILES (git2consul) configuration formats. I decided to use YAML. To apply this configuration in Consul we first need to start it locally in development mode. Because I'm using Docker Toolbox, the default address of Consul is 192.168.99.100. The following Docker command will start a single-node Consul instance and expose it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

Now, you can navigate to the Key/Value tab in the Consul web console and create a new file in YAML format under /config/application.yml, as shown below. Besides the configuration for Swagger and the /info management endpoint, it also enables dynamic HTTP port generation on startup by setting the property micronaut.server.port to -1. Because the name of the file is application.yml, it is by default shared between all microservices that use the Consul config client.

[Image: shared application.yml created in the Consul Key/Value store]
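
For reference, the shared configuration stored in Consul could look roughly like the sketch below. It is assembled from the settings discussed in this article (random server port, the /info endpoint and the Swagger static resources), not copied from the actual file.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false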

Step 4. Service discovery

Micronaut gives you more options for service discovery than for distributed configuration. You can use Eureka, Consul, Kubernetes, or just manually configure a list of available services. However, I have observed that using the Eureka discovery client together with the Consul config client causes some errors on startup. In this example we will use Consul discovery. Because the Consul address has already been provided in bootstrap.yml for every microservice, we just need to enable service discovery by adding the following lines to the application.yml stored in Consul KV.

consul:
  client:
    registration:
      enabled: true

We should also include the following dependency in the Maven pom.xml of every single application.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

Finally, you can just run every microservice (you may run more than one instance locally, since the HTTP port is generated dynamically). Here's my list of running services registered in Consul.

[Image: list of services registered in Consul]

I have run two instances of employee-service as shown below.

[Image: two running instances of employee-service registered in Consul]

Step 5. Inter-service communication

Micronaut uses its built-in HTTP client for load balancing between multiple instances of a single microservice. By default it leverages the Round Robin algorithm. We may choose between a low-level HTTP client and a declarative HTTP client with @Client. The Micronaut declarative HTTP client concept is very similar to Spring Cloud OpenFeign. To use the built-in client we first need to include the following dependency in the project pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-client</artifactId>
</dependency>

The declarative client automatically integrates with the discovery client. It tries to find the service registered in Consul under the same name as the value provided in the id field.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {

	@Get("/department/{departmentId}")
	List<Employee> findByDepartment(Long departmentId);
	
}

Now, the client bean needs to be injected into the controller.

@Controller("/departments")
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Inject
	DepartmentRepository repository;
	@Inject
	EmployeeClient employeeClient;
	
	@Post
	public Department add(@Body Department department) {
		LOGGER.info("Department add: {}", department);
		return repository.add(department);
	}
	
	@Get("/{id}")
	public Department findById(Long id) {
		LOGGER.info("Department find: id={}", id);
		return repository.findById(id);
	}
	
	@Get
	public List<Department> findAll() {
		LOGGER.info("Department find");
		return repository.findAll();
	}
	
	@Get("/organization/{organizationId}")
	@ContinueSpan
	public List<Department> findByOrganization(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}
	
	@Get("/organization/{organizationId}/with-employees")
	@ContinueSpan
	public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

Step 6. Distributed tracing

A Micronaut application can be easily integrated with Zipkin to automatically send traces for HTTP traffic. To enable this feature we first need to include the following dependencies in pom.xml.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
	<groupId>io.zipkin.brave</groupId>
	<artifactId>brave-instrumentation-http</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.zipkin.reporter2</groupId>
	<artifactId>zipkin-reporter</artifactId>
	<scope>runtime</scope>
</dependency>
<dependency>
	<groupId>io.opentracing.brave</groupId>
	<artifactId>brave-opentracing</artifactId>
</dependency>

Then, we have to provide some configuration settings inside application.yml, including the Zipkin URL and sampler options. By setting the property tracing.zipkin.sampler.probability to 1 we force Micronaut to send traces for every single request. Here's our final configuration.

[Image: application.yml with the Zipkin tracing configuration]
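
Based on the properties named above, the tracing section of that configuration is probably close to the following sketch (the Zipkin URL points to the Docker Toolbox address used throughout this article):

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1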

During the tests of my application I have observed that using distributed configuration together with Zipkin tracing results in problems in the communication between a microservice and Zipkin: the traces just do not appear in Zipkin. So, if you would like to test this feature, for now you must provide application.yml on the classpath and disable the Consul distributed configuration for all your applications.
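
In practice that means switching the config client off in bootstrap.yml for the time of such tests:

micronaut:
  config-client:
    enabled: false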

We can add some tags to the spans by using @ContinueSpan or @NewSpan annotations on methods.
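
For illustration, a new span can also be started explicitly on any bean method. The service class below is hypothetical (it is not part of the sample repository) and only shows how @NewSpan and @SpanTag may be combined:

@Singleton
public class OrganizationService {

	// Starts a brand new span named "find-organization" for every invocation
	// and attaches the organizationId argument to it as a tag.
	@NewSpan("find-organization")
	public Organization findById(@SpanTag("organizationId") Long organizationId) {
		// ... lookup logic omitted ...
		return null;
	}

}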

After making some test calls to the GET methods exposed by organization-service and department-service, we may take a look at the Zipkin web console, available at http://192.168.99.100:9411. The following picture shows the list of all the traces sent to Zipkin by our microservices within the last hour.

[Image: list of traces collected by Zipkin]

We can check out the details of every trace by clicking on an element from the list. The following picture illustrates the timeline for the HTTP method exposed by organization-service: GET /organizations/{id}/with-departments-and-employees. This method finds the organization in the in-memory repository, and then calls the HTTP method exposed by department-service: GET /departments/organization/{organizationId}/with-employees. That method is responsible for finding all departments assigned to the given organization. It also needs to return the employees within each department, so it calls the method GET /employees/department/{departmentId} from employee-service.

[Image: Zipkin timeline for GET /organizations/{id}/with-departments-and-employees]

We can also take a look at the details of every single call from the timeline.

[Image: details of a single span from the timeline]

Conclusion

In comparison to Spring Boot, Micronaut is still at an early stage of development. For example, I was not able to implement an application that could act as an API gateway to our system, which can easily be achieved with Spring using Spring Cloud Gateway or Spring Cloud Netflix Zuul. There are still some bugs that need to be fixed. But above all that, Micronaut is now probably the most interesting micro-framework on the market. It implements the most popular microservice patterns, provides integration with several third-party solutions like Consul, Eureka, Zipkin or Swagger, consumes less memory and starts faster than a similar Spring Boot app. I will definitely follow the progress of Micronaut's development closely.

Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more and more often used together with Spring Boot for building backend services. Starting with version 5, the Spring Framework has introduced first-class support for Kotlin. In this article I'm going to show you an example of a microservice built with Kotlin and Spring Boot 2. I'll describe some interesting features of Spring Boot which can be treated as a set of good practices when building backend, REST-based microservices.

1. Configuration and dependencies

To use Kotlin in your Maven project you have to include the kotlin-maven-plugin and add the /src/main/kotlin and /src/test/kotlin directories to the build configuration. We will also set the -Xjsr305 compiler flag to strict. This option is responsible for strict checking of JSR-305 annotations (for example the @NotNull annotation).

<build>
	<sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
	<testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
	<plugins>
		<plugin>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-plugin</artifactId>
			<configuration>
				<args>
					<arg>-Xjsr305=strict</arg>
				</args>
				<compilerPlugins>
					<plugin>spring</plugin>
				</compilerPlugins>
			</configuration>
			<dependencies>
				<dependency>
					<groupId>org.jetbrains.kotlin</groupId>
					<artifactId>kotlin-maven-allopen</artifactId>
					<version>${kotlin.version}</version>
				</dependency>
			</dependencies>
		</plugin>
	</plugins>
</build>

We should also include some core Kotlin libraries like kotlin-stdlib-jdk8 and kotlin-reflect. They are provided by default for a Kotlin project generated on start.spring.io. For REST-based applications you will also need the Jackson module used for JSON serialization/deserialization of Kotlin classes. Of course, we have to include the Spring starter for web applications together with Actuator, which is responsible for providing management endpoints.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>

We use the latest stable version of Spring Boot together with Kotlin 1.2.71.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.2.RELEASE</version>
</parent>
<properties>
	<java.version>1.8</java.version>
	<kotlin.version>1.2.71</kotlin.version>
</properties>

2. Building application

Let's begin with the basics. If you are familiar with Spring Boot and Java, the biggest difference is in the main class declaration. You call the runApplication method outside the Spring Boot application class. The main class, the same as in Java, is annotated with @SpringBootApplication.

@SpringBootApplication
class SampleSpringKotlinMicroserviceApplication

fun main(args: Array<String>) {
    runApplication<SampleSpringKotlinMicroserviceApplication>(*args)
}

Our sample application is very simple. It exposes some REST endpoints providing CRUD operations for a model object. Even in this fragment of code illustrating the controller implementation you can see some nice Kotlin features. We may use a shortened function declaration with an inferred return type. The @PathVariable annotation does not require any arguments; the input parameter name is assumed to be the same as the variable name. Of course, we are using the same annotations as in Java. In Kotlin, every property declared as having a non-null type must be initialized in the constructor, so if you are initializing it using dependency injection it has to be declared as lateinit. Here's the implementation of PersonController.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Autowired
    lateinit var repository: PersonRepository

    @GetMapping("/{id}")
    fun findById(@PathVariable id: Int): Person? = repository.findById(id)

    @GetMapping
    fun findAll(): List<Person> = repository.findAll()

    @PostMapping
    fun add(@RequestBody person: Person): Person = repository.save(person)

    @PutMapping
    fun update(@RequestBody person: Person): Person = repository.update(person)

    @DeleteMapping("/{id}")
    fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Kotlin automatically generates getters and setters for class properties declared as var. Also, if you declare a model as a data class, it generates the equals, hashCode, and toString methods. The declaration of our model class Person is very concise, as shown below.

data class Person(var id: Int?, var name: String, var age: Int, var gender: Gender)
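
As a quick illustration (assuming the Gender enum defines a MALE value), those generated methods give you a readable toString and structural equality for free, plus a copy function:

val person = Person(1, "John Smith", 30, Gender.MALE)
println(person)                  // Person(id=1, name=John Smith, age=30, gender=MALE)
println(person == person.copy()) // true - equals() compares all properties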

I have implemented my own in-memory repository class. I use Kotlin extension functions for manipulating the list of elements. This built-in Kotlin feature is similar to Java streams, with the difference that you don't have to perform any conversion between Collection and Stream.

@Repository
class PersonRepository {
    val persons: MutableList<Person> = ArrayList()

    fun findById(id: Int): Person? {
        return persons.singleOrNull { it.id == id }
    }

    fun findAll(): List<Person> {
        return persons
    }

    fun save(person: Person): Person {
        person.id = (persons.maxBy { it.id!! }?.id ?: 0) + 1
        persons.add(person)
        return person
    }

    fun update(person: Person): Person {
        val index = persons.indexOfFirst { it.id == person.id }
        if (index >= 0) {
            persons[index] = person
        }
        return person
    }

    fun removeById(id: Int): Boolean {
        return persons.removeIf { it.id == id }
    }

}

The sample application source code is available on GitHub in repository https://github.com/piomin/sample-spring-kotlin-microservice.git.

3. Enabling Actuator endpoints

Since we have already included the Spring Boot Actuator starter in the application, we can take advantage of its production-ready features. Spring Boot Actuator gives you very powerful tools for monitoring and managing your apps. You can provide advanced healthchecks and info endpoints, or send metrics to numerous monitoring systems like InfluxDB. After including the Actuator artifact, the only thing we have to do is enable all of its endpoints for our application via HTTP.

management.endpoints.web.exposure.include: '*'

We can customize Actuator endpoints to provide more details about our app. A good practice is to expose information about the version and Git commit through the info endpoint. As usual, Spring Boot provides auto-configuration for such features, so the only thing we need to do is include some Maven plugins in the build configuration in pom.xml. The build-info goal set for spring-boot-maven-plugin forces it to generate a properties file with basic information about the version. The file is located in the directory META-INF/build-info.properties. The git-commit-id-plugin will generate a git.properties file in the root directory.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>build-info</goal>
			</goals>
		</execution>
	</executions>
</plugin>
<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<configuration>
		<failOnNoGitDirectory>false</failOnNoGitDirectory>
	</configuration>
</plugin>

Now you should just build your application using mvn clean install command and then run it.

$ java -jar target\sample-spring-kotlin-microservice-1.0-SNAPSHOT.jar

The info endpoint is available at http://localhost:8080/actuator/info. It exposes all the information that is interesting for us.

{
	"git":{
		"commit":{
			"time":"2019-01-14T16:20:31Z",
			"id":"f7cb437"
		},
		"branch":"master"
	},
	"build":{
		"version":"1.0-SNAPSHOT",
		"artifact":"sample-spring-kotlin-microservice",
		"name":"sample-spring-kotlin-microservice",
		"group":"pl.piomin.services",
		"time":"2019-01-15T09:18:48.836Z"
	}
}

4. Enabling API documentation

Build info and Git properties may be easily injected into the application code. This can be useful in some cases, for example if you have enabled auto-generated API documentation. The most popular tool used for that is Swagger. You can easily integrate Swagger2 with Spring Boot using the SpringFox Swagger project. First, you need to include the following dependencies in your pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>

Then, you should enable Swagger by annotating the configuration class with @EnableSwagger2. The required information is available inside the BuildProperties and GitProperties beans. We just have to inject them into the Swagger configuration class as shown below. We declare them as Optional to prevent an application startup failure in case they are not present on the classpath.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            var buildInfo = build.get()
            var gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths{ it.equals("/persons")}
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true)
    }

    @Bean
    fun uiConfig(): UiConfiguration {
        return UiConfiguration(java.lang.Boolean.TRUE, java.lang.Boolean.FALSE, 1, 1, ModelRendering.MODEL, java.lang.Boolean.FALSE, DocExpansion.LIST, java.lang.Boolean.FALSE, null, OperationsSorter.ALPHA, java.lang.Boolean.FALSE, TagsSorter.ALPHA, UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, null)
    }

    private fun apiInfo(version: String): ApiInfo {
        return ApiInfoBuilder()
                .title("API - Person Service")
                .description("Persons Management")
                .version(version)
                .build()
    }

}

The documentation is available under the context path /swagger-ui.html. Besides the API documentation, it displays full information about the application version, Git commit id and branch name.

[Image: Swagger UI showing the API documentation together with version, commit id and branch]

5. Choosing your app server

A Spring Boot web application can run on three different embedded servers: Tomcat, Jetty or Undertow. By default it uses Tomcat. To change the default server you just need to include the suitable Spring Boot starter and exclude spring-boot-starter-tomcat. A good practice may be to enable switching between servers during the application build. You can achieve it by declaring Maven profiles as shown below.

<profiles>
	<profile>
		<id>tomcat</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>jetty</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-jetty</artifactId>
			</dependency>
		</dependencies>
	</profile>
	<profile>
		<id>undertow</id>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-web</artifactId>
				<exclusions>
					<exclusion>
						<groupId>org.springframework.boot</groupId>
						<artifactId>spring-boot-starter-tomcat</artifactId>
					</exclusion>
				</exclusions>
			</dependency>
			<dependency>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-starter-undertow</artifactId>
			</dependency>
		</dependencies>
	</profile>
</profiles>

Now, if you would like to enable a server other than Tomcat for your application, you should activate the appropriate profile during the Maven build.

$ mvn clean install -Pjetty

Conclusion

Development of microservices using Kotlin and Spring Boot is nice and simple. Based on the sample application, I have introduced the main Spring Boot features for Kotlin. I have also described some good practices you may apply to your microservices when building them with Spring Boot and Kotlin. You can compare the described approach with some other micro-frameworks used with Kotlin, for example Ktor, described in one of my previous articles: Kotlin Microservices with Ktor.

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don't have to provide any Kubernetes YAML templates or build a Docker image by yourself; OpenShift will do it for you. Let's see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample application source code.

1. Prepare application code

I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will now use the same source code as in that article, so you will be able to compare the two approaches. Our source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a Mongo database backend. Here's the diagram that illustrates our architecture.

[Image: architecture of the sample system deployed on OpenShift]

Every microservice is a Spring Boot application which uses Maven as a build tool. After including spring-boot-maven-plugin it is able to generate a single fat JAR with all dependencies, which is required by the source-2-image builder.

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Every application includes starters for Spring Web, Spring Boot Actuator and Spring Data MongoDB for integration with the Mongo database. We also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for those applications which call REST endpoints exposed by other microservices.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger-ui</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-mongodb</artifactId>
	</dependency>
</dependencies>

Every Spring Boot application exposes REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

The application expects to have environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through the OpenFeign declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

	@GetMapping("/employee/organization/{organizationId}")
	List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
	
}

The address of the target service accessed by Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.

spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally you just have to download it from the project site, copy minishift.exe (for Windows) to a directory on your PATH, and start it using the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications: Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable source-2-image for Java apps. First, we grant the admin user the privileges needed to access the openshift project. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let's begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as a cluster admin. Now, we can grant the cluster-admin role to the admin user.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin. You should be able to see the openshift project. That is not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas addon. I prefer the second option.

$ minishift addons apply xpaas
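
For completeness, the manual import mentioned above would look more or less like this (the exact image path may differ depending on the registry you use):

$ oc import-image redhat-openjdk18-openshift --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm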

Now, you can go to the Minishift web console (in my case available at https://192.168.99.100:8443), select the openshift project and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

[Image: redhat-openjdk18-openshift image stream listed in the Minishift web console]

The newest version of that image is 1.3. Surprisingly, that is not the newest version available on the OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What's important for us is that OpenShift generates a secret, by default available under the name mongodb.

[Image: MongoDB template configuration in the OpenShift service catalog]

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based Java standalone projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder while creating a new app, the application type is auto-detected by OpenShift, and Java source code will be deployed as a JEE application on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let's create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances each microservice would be located in a separate Git repository; in our sample all of them are placed in a single repository, so we have to provide the location of the current app by setting the --context-dir parameter. We will also override the default branch to openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service

All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. This can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_
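
The prefix works because oc set env upper-cases every secret key and prepends the given prefix, so a key like database-user (assuming the Mongo template created its usual keys) becomes the MONGO_DATABASE_USER variable referenced in application.yml. You can verify the resulting variables like this:

$ oc set env bc/employee --list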

BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application, a new build is started. First, it downloads the source code from the Git repository, then it builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
Now, we can create the next application: department. For simplicity, all three microservices connect to the same database, which is not recommended under normal circumstances. In that case, the only difference between the department and employee apps is the EMPLOYEE_SERVICE environment variable set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

The same as before we also inject mongodb secret into BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we are deploying the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep look into created OpenShift objects

The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available, one for each application. The same list can be displayed using the command oc get bc.

[Image: list of BuildConfig objects in the OpenShift web console]

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

[Image: build history view with the Start Build button]

We can always display the YAML file with the BuildConfig definition, but it is also possible to perform similar actions using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

[Image: environment variables injected from the mongodb secret into the BuildConfig object]

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available at 172.30.1.1:5000. The list of available image streams can be found under the section Builds -> Images.

[Image: application image streams in the Minishift internal registry]

Every application is automatically exposed on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia) via services. You can also expose these services outside Minishift by creating an OpenShift Route using the oc expose command.

[Image: application service with its exposed ports]

5. Testing the sample system

To proceed with the tests we should first expose our microservices outside Minishift. To do that just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io, as shown below.

[Image: application route exposed outside Minishift]
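
For example, assuming the project is named sample and Minishift runs at 192.168.99.100 (both purely illustrative), fetching the list of employees could look like this:

$ curl http://employee-sample.192.168.99.100.nip.io/employee/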

Each microservice provides Swagger2 API documentation, available on the page swagger-ui.html. Thanks to that we can easily test every single endpoint exposed by the service.

[Image: Swagger UI for one of the deployed microservices]

It's worth noticing that every application makes use of three approaches to inject environment variables into the pod:

  1. It stores the version number in the source code repository inside the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property name is VERSION; it is injected using Spring's @Value and set for the Swagger API (the code is visible below).
  2. I have already set the names of dependent services as environment variables when executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}")
String version;

public static void main(String[] args) {
	SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}
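
For reference, the .s2i/environment file mentioned in point 1 is a plain list of key=value pairs; assuming the version is 1.0, it may contain just a single line:

VERSION=1.0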

Conclusion

Today I have shown you that deploying your applications on OpenShift can be a very simple thing. You don't have to create any YAML descriptor files or build Docker images by yourself to run your app; it is built directly from your source code. You can compare it with the deployment on Kubernetes described in one of my previous articles: Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.