Kotlin Microservices with Micronaut, Spring Cloud and JPA

Micronaut Framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach is integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools from the Spring Cloud project. That's good news, because in version 1.0 the only supported distributed solution was Consul, and there was no way to use Eureka discovery together with the Consul property source (running them together ended with an exception).

In this article you will learn how to:

  • Configure Micronaut Maven support for Kotlin using the Kapt compiler
  • Implement microservices with Micronaut and Kotlin
  • Integrate Micronaut with the Spring Cloud Eureka discovery server
  • Integrate Micronaut with the Spring Cloud Config server
  • Configure JPA/Hibernate support for an application built on top of Micronaut
  • Run a single instance of PostgreSQL shared between all sample microservices (for simplicity)

Our architecture is pretty similar to the one described in my previous article about Micronaut, Quick Guide to Microservices with Micronaut Framework. We again have three microservices that communicate with each other, but this time we use Spring Cloud Eureka and Spring Cloud Config for discovery and distributed configuration instead of Consul. Every service has a backend store – a PostgreSQL database. This architecture is visualized in the following picture.

[Figure: architecture of the sample microservices system]

After that short introduction we may proceed to the development. Let's begin by configuring Kotlin support for Micronaut.

1. Kotlin with Micronaut – configuration

Support for Kotlin with the Kapt compiler plugin is described well on the Micronaut docs site (https://docs.micronaut.io/1.1.0.M1/guide/index.html#kotlin). However, I decided to use Maven instead of Gradle, so our configuration will be slightly different from the instructions for Gradle. We configure Kapt inside the Maven plugin for Kotlin, kotlin-maven-plugin. Thanks to that, Kapt will create Java "stub" classes for each of our Kotlin classes, which can then be processed by Micronaut's Java annotation processor. The Micronaut annotation processors are declared inside the annotationProcessorPaths tag in the configuration section. Here's the full Maven configuration providing support for Kotlin. Besides the core library micronaut-inject-java, we also use annotation processors from the tracing, OpenAPI and JPA libraries.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<dependencies>
		<dependency>
			<groupId>org.jetbrains.kotlin</groupId>
			<artifactId>kotlin-maven-allopen</artifactId>
			<version>${kotlin.version}</version>
		</dependency>
	</dependencies>
	<configuration>
		<jvmTarget>1.8</jvmTarget>
	</configuration>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
		<execution>
			<id>kapt</id>
			<goals>
				<goal>kapt</goal>
			</goals>
			<configuration>
				<sourceDirs>
					<sourceDir>src/main/kotlin</sourceDir>
				</sourceDirs>
				<annotationProcessorPaths>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-inject-java</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-openapi</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut</groupId>
						<artifactId>micronaut-tracing</artifactId>
						<version>${micronaut.version}</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>javax.persistence</groupId>
						<artifactId>javax.persistence-api</artifactId>
						<version>2.2</version>
					</annotationProcessorPath>
					<annotationProcessorPath>
						<groupId>io.micronaut.configuration</groupId>
						<artifactId>micronaut-hibernate-jpa</artifactId>
						<version>1.1.0.RC2</version>
					</annotationProcessorPath>
				</annotationProcessorPaths>
			</configuration>
		</execution>
	</executions>
</plugin>

We should also not run maven-compiler-plugin during the compile phase. The Kapt compiler generates the Java classes, so we don't need to run any other compiler during the build.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-compiler-plugin</artifactId>
	<configuration>
		<proc>none</proc>
		<source>1.8</source>
		<target>1.8</target>
	</configuration>
	<executions>
		<execution>
			<id>default-compile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>default-testCompile</id>
			<phase>none</phase>
		</execution>
		<execution>
			<id>java-compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>java-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>testCompile</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Finally, we add the Kotlin core library and the Jackson module for JSON serialization.

<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

If you are running the application in IntelliJ, you should first enable annotation processing. To do that, go to Build, Execution, Deployment -> Compiler -> Annotation Processors, as shown below.

[Figure: enabling annotation processing in IntelliJ IDEA]

2. Running Postgres

Before proceeding to the development we have to start an instance of the PostgreSQL database. It will be started as a Docker container. For me, PostgreSQL is available at 192.168.99.100:5432, because I'm using Docker Toolbox.

$ docker run -d --name postgres -e POSTGRES_USER=micronaut -e POSTGRES_PASSWORD=123456 -e POSTGRES_DB=micronaut -p 5432:5432 postgres

3. Enabling Hibernate for Micronaut

Hibernate configuration is a little harder for Micronaut than for Spring Boot. We don't have a project like Spring Data JPA, where almost everything is auto-configured. Besides the JDBC driver for the target database, we have to include the following dependencies. We may choose between three available libraries providing a datasource implementation: Tomcat, Hikari or DBCP.

<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-jdbc-hikari</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-jpa</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

The next step is to provide some configuration settings. All the properties will be stored on the configuration server. We have to set the database connection settings and credentials. The JPA configuration settings are provided under the jpa.* key. We force Hibernate to update the database schema on application startup and to print all SQL logs (for testing only).

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Here’s our sample domain object.

@Entity
data class Department(
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq")
        @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq")
        var id: Long,
        var organizationId: Long,
        var name: String) {

    @Transient
    var employees: MutableList<Employee> = mutableListOf()

}

The repository bean injects an EntityManager using the @PersistenceContext and @CurrentSession annotations. All functions need to be annotated with @Transactional, which requires the methods not to be final (the open modifier in Kotlin).

@Singleton
open class DepartmentRepository(@param:CurrentSession @field:PersistenceContext val entityManager: EntityManager) {

    @Transactional
    open fun add(department: Department): Department {
        entityManager.persist(department)
        return department
    }

    @Transactional(readOnly = true)
    open fun findById(id: Long): Department = entityManager.find(Department::class.java, id)

    @Transactional(readOnly = true)
    open fun findAll(): List<Department> = entityManager.createQuery("SELECT d FROM Department d").resultList as List<Department>

    @Transactional(readOnly = true)
    open fun findByOrganization(organizationId: Long) = entityManager.createQuery("SELECT d FROM Department d WHERE d.organizationId = :orgId")
            .setParameter("orgId", organizationId)
            .resultList as List<Department>

}

4. Running Spring Cloud Config Server

Running Spring Cloud Config server is very simple. I have already described it in some of my previous articles, but all of those samples were prepared in Java, while today we start it as a Kotlin application. Here's our main class. It should be annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
class ConfigApplication

fun main(args: Array<String>) {
    runApplication<ConfigApplication>(*args)
}

Besides the Kotlin core dependency we need to include the spring-cloud-config-server artifact.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

By default, the config server tries to use Git as the property source backend. We prefer using classpath resources, which is much simpler for our tests. To do that, we have to enable the native profile. We will also set the server port to 8888.

spring:
  application:
    name: config-service
  profiles:
    active: native
server:
  port: 8888

If you place all configuration files in the directory /src/main/resources/config, they will be automatically loaded after startup.
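
With the native profile, the config server loads property files from the classpath by default. If you keep them in a custom location, the search path can be overridden (a sketch with a hypothetical directory):

spring:
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/config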

[Figure: configuration files placed under src/main/resources/config]

Here's the configuration file for department-service.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
endpoints:
  info:
    enabled: true
    sensitive: false
eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

5. Running Eureka Server

The Eureka server will also be run as a Spring Boot application written in Kotlin.

@SpringBootApplication
@EnableEurekaServer
class DiscoveryApplication

fun main(args: Array<String>) {
    runApplication<DiscoveryApplication>(*args)
}

We also need to include a single dependency, spring-cloud-starter-netflix-eureka-server, besides kotlin-stdlib-jdk8.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib-jdk8</artifactId>
	<version>${kotlin.version}</version>
</dependency>

We run a standalone instance of Eureka on port 8761.

spring:
  application:
    name: discovery-service
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

6. Integrating Micronaut with Spring Cloud

The implementation of the distributed configuration client is automatically included in Micronaut core. We only need to include the module for service discovery.

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-discovery-client</artifactId>
</dependency>

We don't have to place anything in the source code. All the features can be enabled via configuration settings. First, we need to enable the config client by setting the property micronaut.config-client.enabled to true. The next step is to enable the specific implementation of the config client – in this case Spring Cloud Config – and set the target URL.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
spring:
  cloud:
    config:
      enabled: true
      uri: http://localhost:8888/

Each application fetches its properties from the configuration server. The part of the configuration responsible for enabling discovery with the Eureka server is visible below.

eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

7. Running applications

Kapt needs to be able to compile the Kotlin code to Java stubs successfully. That's why we place the main method inside a class declaration and annotate it with @JvmStatic. The main class visible below is also annotated with @OpenAPIDefinition in order to generate a Swagger definition for the API methods.

@OpenAPIDefinition(
        info = Info(
                title = "Departments Management",
                version = "1.0",
                description = "Department API",
                contact = Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
        )
)
open class DepartmentApplication {

    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            Micronaut.run(DepartmentApplication::class.java)
        }
    }
	
}

Here's the controller class from department-service. It injects the repository bean for database integration and EmployeeClient for HTTP communication with employee-service.

@Controller("/departments")
open class DepartmentController(private val logger: Logger = LoggerFactory.getLogger(DepartmentController::class.java)) {

    @Inject
    lateinit var repository: DepartmentRepository
    @Inject
    lateinit var employeeClient: EmployeeClient

    @Post
    fun add(@Body department: Department): Department {
        logger.info("Department add: {}", department)
        return repository.add(department)
    }

    @Get("/{id}")
    fun findById(id: Long): Department? {
        logger.info("Department find: id={}", id)
        return repository.findById(id)
    }

    @Get
    fun findAll(): List<Department> {
        logger.info("Department find")
        return repository.findAll()
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    open fun findByOrganization(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        return repository.findByOrganization(organizationId)
    }

    @Get("/organization/{organizationId}/with-employees")
    @ContinueSpan
    open fun findByOrganizationWithEmployees(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        val departments = repository.findByOrganization(organizationId)
        departments.forEach { it.employees = employeeClient.findByDepartment(it.id) }
        return departments
    }

}

It is worth taking a look at the HTTP client implementation. It was discussed in detail in my previous article, Quick Guide to Microservices with Micronaut Framework.

@Client(id = "employee-service", path = "/employees")
interface EmployeeClient {

	@Get("/department/{departmentId}")
	fun findByDepartment(departmentId: Long): MutableList<Employee>
	
}

You can run all the microservices from IntelliJ. You may also build the whole project with Maven using the mvn clean install command, and then run the applications with java -jar. Thanks to maven-shade-plugin, the applications are packaged as uber JARs. Run them in the following order: config-service, discovery-service, and then the microservices.

$ java -jar config-service/target/config-service-1.0-SNAPSHOT.jar
$ java -jar discovery-service/target/discovery-service-1.0-SNAPSHOT.jar
$ java -jar employee-service/target/employee-service-1.0-SNAPSHOT.jar
$ java -jar department-service/target/department-service-1.0-SNAPSHOT.jar
$ java -jar organization-service/target/organization-service-1.0-SNAPSHOT.jar

After that you may take a look at the Eureka dashboard, available at http://localhost:8761, to see the list of running services. You may also perform some tests by calling the HTTP API methods, as in the example below the screenshot.

[Figure: Eureka dashboard with the registered services]
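
For example, you might call the department-service API like this (just a sketch – the services start on random ports, so first check the port assigned to department-service on the Eureka dashboard; 8080 below is a placeholder):

$ curl http://localhost:8080/departments
$ curl http://localhost:8080/departments/organization/1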

Summary

The source code of the sample applications is available on GitHub in the repository sample-micronaut-microservices, in the kotlin branch. You can refer to that repository for implementation details that have not been included in this article.

Secure Spring Cloud Microservices with Vault and Nomad

One of the significant topics related to microservices security is managing and protecting sensitive data like tokens, passwords or certificates used by your application. As a developer you probably often implement software that connects to external databases, message brokers or just other applications. How do you store the credentials used by your application? To be honest, most of the software I have seen just stored sensitive data as plain text in configuration files. Thanks to that, I was always able to retrieve the credentials to every database I needed just by looking at the application source code. Of course, we can always encrypt sensitive data, but with many microservices, each having a separate database, that may not be a very comfortable solution.

Today I'm going to show you how to integrate your Spring Boot application with HashiCorp's Vault in order to store your sensitive data properly. The first good news is that you don't have to create any keys or certificates for encryption and decryption, because Vault will do it in your place. In a few places in this article I'll refer to my previous article about HashiCorp's solutions, Deploying Spring Cloud Microservices on HashiCorp's Nomad. Now, as then, I deploy my sample applications on Nomad to take advantage of the built-in integration between those two very interesting HashiCorp tools. We will also use another HashiCorp solution for service discovery in inter-service communication – Consul. It's also worth mentioning that Spring Cloud provides a dedicated project for integration with Vault – Spring Cloud Vault.

Architecture

The sample presented in this article consists of two applications deployed on HashiCorp's Nomad: callme-service and caller-service. Microservice caller-service calls an endpoint exposed by callme-service. The inter-service communication is performed using the name of the target application registered in the Consul server. Microservice callme-service stores the history of all interactions triggered by caller-service in a database. The credentials to the database are stored in Vault. Nomad is integrated with Vault and stores the root token, which is not visible to the applications. The architecture of the described solution is visible in the following picture.

[Figure: architecture – caller-service and callme-service on Nomad, integrated with Consul, Vault and PostgreSQL]

The current sample is pretty similar to the one presented in my article Deploying Spring Cloud Microservices on HashiCorp's Nomad. It is also available in the same GitHub repository, sample-nomad-java-service, but in a different branch, vault. The current sample adds integration with PostgreSQL and a Vault server for managing the database credentials.

1. Running Vault

We will run Vault inside a Docker container in development mode. A server in development mode does not require any further setup; it is ready to use just after startup. It provides in-memory encrypted storage and an unsecured (HTTP) connection, which is not a problem for demo purposes. We could override the default server IP address and listening port by setting the environment property VAULT_DEV_LISTEN_ADDRESS, but we won't do that. After startup our instance of Vault is available on port 8200. We can use the admin web console, which for me is available at http://192.168.99.100:8200. The current version of Vault is 1.0.0.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

It is possible to log in using different methods, but the most suitable way for us is through a token. To do that we have to display the container logs using the command docker logs vault, and then copy the Root Token as shown below.

[Figure: Vault container logs with the Root Token]

Now you can log in to the Vault web console.

[Figure: Vault web console login]

2. Integration with Postgres database

In Vault we can create a secrets engine that connects to other services and generates dynamic credentials on demand. Secrets engines are mounted under a path. There are dedicated engines for various databases, for example PostgreSQL. Before activating such an engine we should run an instance of the Postgres database. This time we will also use a Docker container. It is possible to set the login and password to the database using environment variables.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=postgres123456 -e POSTGRES_USER=postgres postgres

After starting the database, we may proceed to the engine configuration in the Vault web console. First, let's create our first secrets engine. We may choose between several different types of engine. The right choice for now is Databases.

[Figure: enabling the database secrets engine in the Vault web console]
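
If you prefer the CLI to the web console, the same engine can be enabled with a single command:

$ vault secrets enable database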

You can apply a new configuration to Vault using the vault CLI or the HTTP API. The Vault web console provides a terminal for running CLI commands, but it can be problematic in some cases. For example, I had a problem with escaping strings in some SQL commands, and therefore had to add them using the HTTP API. No matter which method you use, the next steps are the same. Following the Vault documentation, we first need to configure the plugin for the PostgreSQL database and then provide connection settings and credentials.

$ vault write database/config/postgres plugin_name=postgresql-database-plugin allowed_roles="default" connection_url="postgresql://{{username}}:{{password}}@192.168.99.100:5432?sslmode=disable" username="postgres" password="postgres123456"

Alternatively, you can perform the same action using the HTTP API. To authenticate against Vault we need to add the X-Vault-Token header with the root token. I have disabled SSL for the connection with Postgres by setting sslmode=disable. There is only one role allowed to use this plugin: default. Now, let's configure that role.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"plugin_name": "postgresql-database-plugin","allowed_roles": "default","connection_url": "postgresql://{{username}}:{{password}}@localhost:5432?sslmode=disable","username": "postgres","password": "postgres123456"}' http://192.168.99.100:8200/v1/database/config/postgres

The role can be created either with the CLI or with the HTTP API. The name of the role should be the same as the name passed in the allowed_roles field in the previous step. We also have to set the target database name and the SQL statement that creates a user with privileges.

$ vault write database/roles/default db_name=postgres creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" default_ttl="1h" max_ttl="24h"

Alternatively you can call the following HTTP API endpoint.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"db_name":"postgres", "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";"]}' http://192.168.99.100:8200/v1/database/roles/default

And that's all. Now we can test our configuration using the command vault read database/creds/default, with the role's name, as shown below. You can log in to the database using the returned credentials. By default, they are valid for one hour.

[Figure: credentials generated with vault read database/creds/default]
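
For reference, the command itself is shown below; every invocation returns a fresh username and password pair together with lease metadata, so your values will differ:

$ vault read database/creds/default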

3. Enabling Spring Cloud Vault

We have successfully configured the secrets engine that is responsible for creating users on Postgres. Now we can proceed to the development and integrate our application with Vault. Fortunately, there is the Spring Cloud Vault project, which provides out-of-the-box integration with Vault database secrets engines. The only thing we have to do is to include Spring Cloud Vault in our project and provide some configuration settings. Let's start by setting the Spring Cloud Release Train. We use the newest stable version, Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We have to include two dependencies in our pom.xml. The starter spring-cloud-starter-vault-config is responsible for loading configuration from Vault, while spring-cloud-vault-config-databases is responsible for integration with the database secrets engines.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The sample application also connects to the Postgres database, so we include the following dependencies as well.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The only thing left is to configure the integration with Vault via Spring Cloud Vault. The following configuration settings should be placed in bootstrap.yml (not application.yml). Because we run our application on Nomad, we use the port number dynamically set by Nomad, available under the environment property NOMAD_HOST_PORT_http, and the secret token from Vault, available under the environment property VAULT_TOKEN.

server:
  port: ${NOMAD_HOST_PORT_http:8091}

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres

The important part of the configuration visible above is under the property spring.cloud.vault.postgresql. Following the Spring Cloud documentation: "Username and password are stored in spring.datasource.username and spring.datasource.password so using Spring Boot will pick up the generated credentials for your DataSource without further configuration". Spring Cloud Vault connects with Vault and then uses the role default (previously created on Vault) to generate new credentials to the database. Those credentials are injected into the spring.datasource properties, and the application connects to the database using them. Everything works fine. Now, let's try to run our applications on Nomad.

4. Deploying apps on Nomad

Before starting a Nomad node we should also run Consul using its Docker container. Here's the Docker command that starts a single-node Consul instance.

$ docker run -d --name consul -p 8500:8500 consul
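
You can verify that the agent is up by listing the registered services through Consul's HTTP API (at this point only the consul service itself should be returned):

$ curl http://192.168.99.100:8500/v1/catalog/services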

After that we can configure the connection settings for Consul and Vault in the Nomad configuration. I have created the file nomad.conf. Nomad authenticates itself against Vault using the root token. The connection with Consul is not secured. Sometimes it is also required to set the network interface name and the total CPU on the machine for the Nomad client. Most clients are able to determine these values automatically, but that did not work for me.

client {
  network_interface = "Połączenie lokalne 4"
  cpu_total_compute = 10400
}

consul {
  address = "192.168.99.100:8500"
}

vault {
  enabled = true
  address = "http://192.168.99.100:8200"
  token = "s.6jhQ1WdcYrxpZmpa0RNd0LMw"
}

Let's run Nomad in development mode, passing the configuration file location.

$ nomad agent -dev -config=nomad.conf

If everything works fine you should see a startup log similar to the one below.

[Figure: Nomad agent startup logs showing Consul and Vault integration]

Once we have successfully started the Nomad agent integrated with Consul and Vault, we can proceed to the application deployment. First, build the whole project with the mvn clean install command. The next step is to prepare Nomad's job descriptor file. For more details about the Nomad deployment process and its descriptor file you can refer to my previous article (mentioned in the preface). The descriptor file is available in the application's GitHub repository under the path callme-service/job.nomad for callme-service, and caller-service/job.nomad for caller-service.

job "callme-service" {
	datacenters = ["dc1"]
	type = "service"
	group "callme" {
		count = 2
		task "api" {
			driver = "java"
			config {
				jar_path    = "C:\\Users\\minkowp\\git\\sample-nomad-java-services-idea\\callme-service\\target\\callme-service-1.0.0-SNAPSHOT.jar"
				jvm_options = ["-Xmx256m", "-Xms128m"]
			}
			resources {
				cpu    = 500 # MHz
				memory = 300 # MB
				network {
					port "http" {}
				}
			}
			service {
				name = "callme-service"
				port = "http"
			}
			vault {
				policies = ["nomad"]
			}
		}
		restart {
			attempts = 1
		}
	}
}

You will have to change the value of the jar_path property to the path of your application binaries. Before applying this deployment to Nomad we also have to add some additional configuration on Vault. When enabling the integration with Vault we pass the names of the policies used for checking permissions. I set the policy named nomad, which now has to be created in Vault. Our application requires permission to read the paths /secret/* and /database/*, as shown below.

[Figure: the nomad policy in the Vault web console]
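
A minimal policy file granting those permissions might look like the sketch below; assuming it is saved as nomad-policy.hcl, it can be applied with the command vault policy write nomad nomad-policy.hcl.

path "secret/*" {
  capabilities = ["read"]
}

path "database/*" {
  capabilities = ["read"]
}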

Finally, we can deploy our application callme-service on Nomad by executing the following command.

$ nomad job run job.nomad
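
You can then check the status of the deployed job and its allocations:

$ nomad job status callme-service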

A similar descriptor file is available for caller-service, so we can deploy it as well. All the microservices have been registered in Consul as shown below.

[Figure: microservices registered in Consul]

Here is the list of registered instances of caller-service. As you can see in the picture below, it is available under port 25816.

[Figure: registered instances of caller-service]

You can also take a look at the Nomad jobs view.

[Figure: the Nomad jobs view]

Up-to-date cache with EclipseLink and Oracle

One of the most useful features provided by ORM libraries is the second-level cache, usually called L2. The L2 object cache reduces database access for entities and their relationships. It is enabled by default in the most popular JPA implementations like Hibernate or EclipseLink. That isn't a problem, unless a table in the database is modified directly by third-party applications, or by another instance of the same application in a clustered environment. One of the available solutions to this problem is an in-memory data grid, which stores all data in memory distributed across many nodes inside a cluster. Tools like Hazelcast or Apache Ignite have been described several times on my blog. If you are interested in one of those tools I recommend you read one of my previous articles about them: Hazelcast Hot Cache with Striim.

However, we won't discuss those in this article. Today I would like to talk about the Continuous Query Notification feature provided by Oracle Database. It solves the problem of updating or invalidating a cache when the data changes in the database. Oracle JDBC drivers have provided support for it since 11g Release 1. This functionality is based on receiving invalidation events from the JDBC drivers. Fortunately, EclipseLink builds on that feature in its solution called EclipseLink Database Change Notification (DCN). In this article I'm going to show you how to use it with Spring Data JPA and the EclipseLink library.

How it works

The most useful functionality provided by Oracle Database Continuous Query Notification is the ability to raise database events when rows in a table are modified. It enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects associated with those queries. To detect modifications, EclipseLink DCN uses the Oracle ROWID to intercept changes in the table. The ROWID is included in all queries for a DCN-enabled class. EclipseLink also retrieves the ROWID of a saved entity after an insert operation, and maintains a cache index on that ROWID. It also selects the database transaction ID once for each transaction to avoid invalidating the cache during the processing of a transaction.

When the database sends a notification, it usually contains the following information:

  • The names of the modified objects, for example the name of the changed table
  • The type of change: INSERT, UPDATE, DELETE, ALTER TABLE or DROP TABLE
  • Oracle's ROWID of the changed record

Running Oracle database locally

Before we start working on our sample application we need to have an Oracle database installed. Fortunately, there are some Docker images with Oracle Standard Edition 12c. The command visible below starts the Oracle XE version and exposes it on the default 1521 port. It is also possible to use the web console, available under port 9080.

$ docker run -d --name oracle -p 9080:8080 -p 1521:1521 sath89/oracle-12c

We need the sysdba role in order to be able to grant the CHANGE NOTIFICATION privilege to our database user. The default password for the user system is oracle.

GRANT CHANGE NOTIFICATION TO PIOMIN;

You may use any Oracle client like Oracle SQL Developer to connect to the database, or just log in to the web console. Since I run Docker on Windows it is available on my laptop at http://192.168.99.100:9080/em. Of course it is Oracle, so you need to settle in for a long haul and wait until it starts. You can observe the progress of the installation by running the command docker logs -f oracle. When you finally see a "100% complete" log entry, you may grant the required privileges to the existing user or create a new one with the set of needed privileges, and proceed to the next step.

Sample application

The sample application source code is available on GitHub at https://github.com/piomin/sample-eclipselink-jpa.git. It is a Spring Boot application that uses Spring Data JPA as the data access layer implementation. Because the default JPA provider in that project is EclipseLink, we should remember to exclude the Hibernate libraries from the starters spring-boot-starter-data-jpa and spring-boot-starter-web (one possible exclusion is sketched after the dependency list below). Besides the standard EclipseLink library for JPA, we also have to include the EclipseLink implementation for the Oracle database (org.eclipse.persistence.oracle) and the Oracle JDBC driver.

<dependency>
	<groupId>org.eclipse.persistence</groupId>
	<artifactId>org.eclipse.persistence.jpa</artifactId>
	<version>2.7.1</version>
</dependency>
<dependency>
	<groupId>org.eclipse.persistence</groupId>
	<artifactId>org.eclipse.persistence.oracle</artifactId>
	<version>2.7.1</version>
</dependency>
<dependency>
	<groupId>com.oracle</groupId>
	<artifactId>ojdbc7</artifactId>
	<version>12.1.0.1</version>
</dependency>
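
The Hibernate exclusion mentioned above might look like the following sketch (assuming Hibernate comes in via hibernate-core – the exact transitive artifacts can vary between Spring Boot versions):

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
	<exclusions>
		<exclusion>
			<groupId>org.hibernate</groupId>
			<artifactId>hibernate-core</artifactId>
		</exclusion>
	</exclusions>
</dependency>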

The next step is to provide connection settings to the Oracle database launched as a Docker container. Do not try to do it through application.yml properties, because Spring Boot by default uses HikariCP for connection pooling, which in turn causes a conflict with the Oracle datasource during application bootstrap. The following datasource declaration works successfully.

@Bean
public DataSource dataSource() {
	final DriverManagerDataSource dataSource = new DriverManagerDataSource();
	dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
	dataSource.setUrl("jdbc:oracle:thin:@192.168.99.100:1521:xe");
	dataSource.setUsername("piomin");
	dataSource.setPassword("Piot_123");
	return dataSource;
}

EclipseLink with Database Change Notification

EclipseLink needs some specific configuration settings to work successfully with Spring Boot and Spring Data JPA. These settings may be provided inside a @Configuration class that extends the JpaBaseConfiguration class. First, we should set EclipseLinkJpaVendorAdapter as the default JPA vendor adapter. Then we may configure some additional JPA settings, like a detailed logging level or automatic creation of database objects during application startup. However, the most important thing for us in the fragment of source code visible below are the Oracle Continuous Query Notification settings.

EclipseLink CQN support is enabled by the OracleChangeNotificationListener listener, which integrates with Oracle JDBC in order to receive database change notifications. The full class name of the listener should be passed as the value of the eclipselink.cache.database-event-listener property. EclipseLink by default enables the L2 cache for all entities, and accordingly all tables in the persistence unit are registered for change notification. You may exclude some of them by using the databaseChangeNotificationType attribute of the @Cache annotation on the selected entity (a sketch follows the configuration class below).

@Configuration
@EnableAutoConfiguration
public class JpaConfiguration extends JpaBaseConfiguration {

	protected JpaConfiguration(DataSource dataSource, JpaProperties properties, ObjectProvider jtaTransactionManager, ObjectProvider transactionManagerCustomizers) {
		super(dataSource, properties, jtaTransactionManager, transactionManagerCustomizers);
	}

	@Override
	protected AbstractJpaVendorAdapter createJpaVendorAdapter() {
		return new EclipseLinkJpaVendorAdapter();
	}

	@Override
	protected Map getVendorProperties() {
	    HashMap map = new HashMap();
	    map.put(PersistenceUnitProperties.WEAVING, InstrumentationLoadTimeWeaver.isInstrumentationAvailable() ? "true" : "static");
	    map.put(PersistenceUnitProperties.DDL_GENERATION, "create-or-extend-tables");
	    map.put(PersistenceUnitProperties.LOGGING_LEVEL, SessionLog.FINEST_LABEL);
	    map.put(PersistenceUnitProperties.DATABASE_EVENT_LISTENER, "org.eclipse.persistence.platform.database.oracle.dcn.OracleChangeNotificationListener");
	    return map;
	}

}
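
For example, opting a single entity out of change notification might look like this (a sketch – the AuditLog entity is hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.DatabaseChangeNotificationType;

@Entity
// Exclude this entity from Oracle DCN registration; its table will not be
// registered for change events, while the regular L2 cache rules still apply.
@Cache(databaseChangeNotificationType = DatabaseChangeNotificationType.NONE)
public class AuditLog {

	@Id
	private Long id;

}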

It is worth mentioning that EclipseLink's CQN integration has some important limitations:

  • Changes to an object's secondary tables will not trigger it to be invalidated unless a version is used and updated in the primary table
  • Changes to an object's OneToMany, ManyToMany, and ElementCollection relationships will not trigger it to be invalidated unless a version is used and updated in the primary table

The conclusion from these limitations is obvious: we should enable optimistic locking by including an @Version column in our entities. The column annotated with @Version in the primary table will always be updated, and the object will always be invalidated. There are three entities implemented. The entity Order is in a many-to-one relationship with the Product and Customer entities. All these classes have the @Version feature enabled.

@Entity
@Table(name = "JPA_ORDER")
public class Order {

	@Id
	@SequenceGenerator(sequenceName = "SEQ_ORDER", allocationSize = 1, initialValue = 1, name = "orderSequence")
	@GeneratedValue(generator = "orderSequence", strategy = GenerationType.SEQUENCE)
	private Long id;
	@ManyToOne
	private Customer customer;
	@ManyToOne
	private Product product;
	@Enumerated
	private OrderStatus status;
	private int count;

	@Version
	private long version;

	public Long getId() {
		return id;
	}

	public void setId(Long id) {
		this.id = id;
	}

	public Customer getCustomer() {
		return customer;
	}

	public void setCustomer(Customer customer) {
		this.customer = customer;
	}

	public Product getProduct() {
		return product;
	}

	public void setProduct(Product product) {
		this.product = product;
	}

	public OrderStatus getStatus() {
		return status;
	}

	public void setStatus(OrderStatus status) {
		this.status = status;
	}

	public int getCount() {
		return count;
	}

	public void setCount(int count) {
		this.count = count;
	}

	public long getVersion() {
		return version;
	}

	public void setVersion(long version) {
		this.version = version;
	}

	@Override
	public String toString() {
		return "Order [id=" + id + ", product=" + product + ", status=" + status + ", count=" + count + "]";
	}

}

Testing

After launching the application you should see the following logs, generated at the Finest level.

[EL Finest]: connection: 2018-03-23 15:45:50.591--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_PRODUCT] for database change event notification.
[EL Finest]: connection: 2018-03-23 15:45:50.608--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_CUSTOMER] for database change event notification.
[EL Finest]: connection: 2018-03-23 15:45:50.616--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_ORDER] for database change event notification.

The registrations are stored in the table user_change_notification_regs, which is accessible to your application's user (PIOMIN).

$ SELECT regid, table_name FROM user_change_notification_regs;
     REGID TABLE_NAME
---------- ---------------------------------------------------------------
       326 PIOMIN.JPA_PRODUCT
       326 PIOMIN.JPA_CUSTOMER
       326 PIOMIN.JPA_ORDER

Our sample application exposes Swagger documentation of the API, which may be accessed at http://localhost:8090/swagger-ui.html. You can create or find some entities using it. If you try to find the same entity several times, you will see that only the first invocation generates an SQL query in the logs, while all the others are served from the cache. Now, try to change that record using any Oracle client, like Oracle SQL Developer, and verify that the cache has been successfully refreshed.
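
For example, a direct modification issued from an SQL client might look like the statement below (hypothetical values – bumping the version column keeps the application's optimistic locking consistent with the direct change):

UPDATE JPA_ORDER SET STATUS = 1, VERSION = VERSION + 1 WHERE ID = 1;
COMMIT;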

[Figure: testing the API via Swagger UI]

Summary

When I first heard about Oracle Database Change Notification support in the EclipseLink JPA vendor, my expectations were really high. It is a very interesting solution which guarantees an automatic cache refresh after changes are performed on database tables by third-party applications bypassing your cache. However, I had some problems with that solution during tests. In some cases it just didn't work, and diagnosing the errors was really troublesome. It would be nice if such a solution were also available for databases other than Oracle and for other JPA vendors like Hibernate.