Part 1: Testing Kafka Microservices With Micronaut

I have already described how to build a microservices architecture based entirely on message-driven communication through Apache Kafka in one of my previous articles, Kafka In Microservices With Micronaut. As the title suggests, the sample applications and their integration with Kafka were built on top of the Micronaut Framework. I described some interesting features of Micronaut that can be used for building message-driven microservices, but I deliberately didn’t write anything about testing. In this article I’m going to show you how to test your Kafka microservice using Micronaut Test core features (component tests), Testcontainers (integration tests) and Pact (contract tests).

Deploying Spring Boot Application on OpenShift with Dekorate

More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-2-Image) mechanism, which may help reduce the time required to prepare application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. The project seems very promising, which appears to be confirmed by Red Hat’s announcement that Dekorate will be included in Red Hat OpenShift Application Runtimes as a “Tech Preview”. You can read more about it in this article: https://developers.redhat.com/blog/2019/08/15/how-to-use-dekorate-to-create-kubernetes-manifests.

Quick Guide to Microservices with Quarkus on Openshift

You have had an opportunity to read many articles on my blog about building microservices with frameworks such as Spring Boot or Micronaut. There is another very interesting framework dedicated to microservices architecture which is becoming increasingly popular – Quarkus. It is introduced as a next-generation Kubernetes/OpenShift-native Java framework. It is built on top of well-known Java standards like CDI, JAX-RS and Eclipse MicroProfile, which distinguishes it from Spring Boot.
Some other features that may convince you to use Quarkus are an extremely fast boot time, a minimal memory footprint optimized for running in containers, and a lower time-to-first-request. Also, despite being a relatively new framework (the current version is 0.21), it has a lot of extensions, including support for Hibernate, Kafka, RabbitMQ, OpenAPI, Vert.x and many more.
In this article I’m going to guide you through building microservices with Quarkus and running them on OpenShift (via Minishift). We will cover the following topics:

  • Building REST-based application with input validation
  • Communication between microservices with RestClient
  • Exposing health checks (liveness, readiness)
  • Exposing OpenAPI/Swagger documentation
  • Running applications on the local machine with Quarkus Maven plugin
  • Testing with JUnit and RestAssured
  • Deploying and running Quarkus applications on Minishift using source-2-image

1. Creating application – dependencies

To create a new application you may execute a single Maven command that uses quarkus-maven-plugin. The list of extensions to include should be declared in the -Dextensions parameter.

mvn io.quarkus:quarkus-maven-plugin:0.21.1:create \
    -DprojectGroupId=pl.piomin.services \
    -DprojectArtifactId=employee-service \
    -DclassName="pl.piomin.services.employee.controller.EmployeeController" \
    -Dpath="/employees" \
    -Dextensions="resteasy-jackson, hibernate-validator"

Here’s the structure of our pom.xml:

<properties>
	<quarkus.version>0.21.1</quarkus.version>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	<maven.compiler.source>11</maven.compiler.source>
	<maven.compiler.target>11</maven.compiler.target>
</properties>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-bom</artifactId>
			<version>${quarkus.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>
<build>
	<plugins>
		<plugin>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-maven-plugin</artifactId>
			<version>${quarkus.version}</version>
			<executions>
				<execution>
					<goals>
						<goal>build</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

For building a simple REST application with input validation we don’t need many modules. As you have probably noticed, I declared just two extensions, which is equivalent to the following list of dependencies in pom.xml:

<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-hibernate-validator</artifactId>
</dependency>

2. Creating application – code

What might be a bit surprising for Spring Boot or Micronaut users: there is no main, runnable class with a static main method. A resource/controller class is de facto the main class. Quarkus resource/controller classes and methods should be marked with annotations from the javax.ws.rs library. Here’s the implementation of the REST controller inside employee-service:

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Inject
	EmployeeRepository repository;
	
	@POST
	public Employee add(@Valid Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.add(employee);
	}
	
	@Path("/{id}")
	@GET
	public Employee findById(@PathParam("id") Long id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id);
	}

	@GET
	public Set<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@Path("/department/{departmentId}")
	@GET
	public Set<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartment(departmentId);
	}
	
	@Path("/organization/{organizationId}")
	@GET
	public Set<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}
	
}

We use CDI for dependency injection and SLF4J for logging. The controller class uses an in-memory repository bean for storing and retrieving data. The repository bean is annotated with the CDI @ApplicationScoped annotation and injected into the controller:

@ApplicationScoped
public class EmployeeRepository {

	private Set<Employee> employees = new HashSet<>();

	public EmployeeRepository() {
		add(new Employee(1L, 1L, "John Smith", 30, "Developer"));
		add(new Employee(1L, 1L, "Paul Walker", 40, "Architect"));
	}

	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		return employees.stream().filter(a -> a.getId().equals(id)).findFirst().orElse(null);
	}

	public Set<Employee> findAll() {
		return employees;
	}
	
	public Set<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toSet());
	}
	
	public Set<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toSet());
	}

}

And the last component is the domain class with validation annotations:

public class Employee {

	private Long id;
	@NotNull
	private Long organizationId;
	@NotNull
	private Long departmentId;
	@NotBlank
	private String name;
	@Min(1)
	@Max(100)
	private int age;
	@NotBlank
	private String position;
	
	// ... GETTERS AND SETTERS
	
}
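
To quickly check these constraints you can send a request that violates them – an empty JSON object should be rejected with HTTP 400. The curl call below is just an illustration, assuming the service runs locally on the default port:

$ curl -i -X POST -H "Content-Type: application/json" -d '{}' http://localhost:8080/employees
HTTP/1.1 400 Bad Request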

3. Unit Testing

As with most popular Java frameworks, unit testing with Quarkus is very simple. If you are testing a REST-based web application you should include the following dependencies in your pom.xml:

<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-junit5</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.rest-assured</groupId>
	<artifactId>rest-assured</artifactId>
	<scope>test</scope>
</dependency>

Let’s analyze the test class from organization-service (another of our microservices, alongside employee-service and department-service). A test class should be annotated with @QuarkusTest. We may inject other beans via the @Inject annotation. The rest is typical for JUnit and RestAssured – we are testing the API methods exposed by the controller. Because we are using an in-memory repository, we don’t have to mock anything except inter-service communication (we will discuss it later in this article). We have some positive scenarios for the GET and POST methods and a single negative scenario that does not pass input validation (testInvalidAdd).

@QuarkusTest
public class OrganizationControllerTests {

    @Inject
    OrganizationRepository repository;

    @Test
    public void testFindAll() {
        given().when().get("/organizations").then().statusCode(200).body(notNullValue());
    }

    @Test
    public void testFindById() {
        Organization organization = new Organization("Test3", "Address3");
        organization = repository.add(organization);
        given().when().get("/organizations/{id}", organization.getId()).then().statusCode(200)
                .body("id", equalTo(organization.getId().intValue()))
                .body("name", equalTo(organization.getName()));
    }

    @Test
    public void testFindByIdWithDepartments() {
        given().when().get("/organizations/{id}/with-departments", 1L).then().statusCode(200)
                .body(notNullValue())
                .body("departments.size()", is(1));
    }

    @Test
    public void testAdd() {
        Organization organization = new Organization("Test5", "Address5");
        given().contentType("application/json").body(organization)
                .when().post("/organizations").then().statusCode(200)
                .body("id", notNullValue())
                .body("name", equalTo(organization.getName()));
    }

    @Test
    public void testInvalidAdd() {
        Organization organization = new Organization();
        given().contentType("application/json").body(organization).when().post("/organizations").then().statusCode(400);
    }

}

4. Inter-service communication

Since Quarkus is targeted at running on Kubernetes, it does not provide any built-in support for third-party service discovery (for example through Consul or Netflix Eureka) or an HTTP client integrated with such discovery. However, Quarkus provides dedicated client support for REST communication. To use it we first need to include the following dependency:

<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-rest-client</artifactId>
</dependency>

Quarkus provides a declarative REST client based on MicroProfile REST Client. You need to create an interface with the required methods and annotate it with @RegisterRestClient. The other annotations are much the same as on the server side. The @RegisterRestClient annotation lets Quarkus know that this interface is meant to be available for CDI injection as a REST client.

@Path("/departments")
@RegisterRestClient
public interface DepartmentClient {

	@GET
	@Path("/organization/{organizationId}")
	@Produces(MediaType.APPLICATION_JSON)
	List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

	@GET
	@Path("/organization/{organizationId}/with-employees")
	@Produces(MediaType.APPLICATION_JSON)
	List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);
	
}

Now, let’s take a look at the controller class inside organization-service. Together with @Inject, we need to use the @RestClient annotation to inject the REST client bean properly. After that you can use the interface methods to call endpoints exposed by other services.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationController {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrganizationController.class);
	
	@Inject
	OrganizationRepository repository;
	@Inject
	@RestClient
	DepartmentClient departmentClient;
	@Inject
	@RestClient
	EmployeeClient employeeClient;
	
	// ... OTHER FIND METHODS

	@Path("/{id}/with-departments")
	@GET
	public Organization findByIdWithDepartments(@PathParam("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
		return organization;
	}
	
	@Path("/{id}/with-departments-and-employees")
	@GET
	public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
		return organization;
	}
	
	@Path("/{id}/with-employees")
	@GET
	public Organization findByIdWithEmployees(@PathParam("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
		return organization;
	}
	
}

The last missing things required for communication are the addresses of the target services. We may provide them using the baseUri field of the @RegisterRestClient annotation. However, it seems that a better solution is to place them inside application.properties. The property name needs to contain the fully qualified name of the client interface and the suffix mp-rest/url.

pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://localhost:8090
pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://localhost:8080

I have already mentioned unit testing and inter-service communication in the previous sections. To test an API method that communicates with other applications we need to mock the REST client. Here’s a sample mock created for DepartmentClient. It should be visible only during tests, so we have to place it inside src/test/java. If we annotate it with @Mock and @RestClient, Quarkus will automatically use this bean by default instead of the declarative REST client defined inside src/main/java.

@Mock
@ApplicationScoped
@RestClient
public class MockDepartmentClient implements DepartmentClient {

    @Override
    public List<Department> findByOrganization(Long organizationId) {
        return Collections.singletonList(new Department("Test1"));
    }

    @Override
    public List<Department> findByOrganizationWithEmployees(Long organizationId) {
        return null;
    }

}

5. Monitoring and Documentation

We can easily expose health checks and API documentation with Quarkus. API documentation is built using OpenAPI/Swagger. Quarkus leverages libraries available within the SmallRye project. We should include the following dependencies in our pom.xml:

<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>
<dependency>
	<groupId>io.quarkus</groupId>
	<artifactId>quarkus-smallrye-health</artifactId>
</dependency>

We can define two types of health checks: readiness and liveness. They are available under the /health/ready and /health/live context paths. To expose them outside the application we need to define a bean that implements the MicroProfile HealthCheck interface. The readiness endpoint should be annotated with @Readiness, the liveness endpoint with @Liveness.

@ApplicationScoped
@Readiness
public class ReadinessHealthcheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.named("Employee Health Check").up().build();
    }

}
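
Once the application is running you can verify the check with a plain HTTP request. The response below is typical for MicroProfile Health, although the exact fields depend on the SmallRye Health version in use:

$ curl http://localhost:8080/health/ready
{
    "status": "UP",
    "checks": [
        {
            "name": "Employee Health Check",
            "status": "UP"
        }
    ]
}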

To enable Swagger documentation we don’t need to do anything more than add the dependency. Quarkus also provides a built-in UI for Swagger. By default it is enabled in development mode, so if you want to use it in production you should add the line quarkus.swagger-ui.always-include=true to your application.properties file. Now, if you run employee-service locally in development mode by executing the Maven command mvn compile quarkus:dev, you may view the API specification under the URL http://localhost:8080/swagger-ui.

[Screenshot: Swagger UI exposed by employee-service]

Here’s my log from the application startup. It prints the listening port and the list of loaded extensions.

[Screenshot: application startup log]

6. Running Microservices on the Local Machine

Because we would like to run more than one application on the same machine, we need to override the default HTTP listening port. While employee-service is still running on the default port 8080, the other microservices use different ports, as shown below.

department-service:
[Screenshot: department-service port configuration]

organization-service:
[Screenshot: organization-service port configuration]
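
The port is overridden in each service’s application.properties. Here’s a minimal sketch, assuming department-service listens on port 8090 to match the client URL configured earlier:

# department-service: src/main/resources/application.properties
quarkus.http.port=8090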

Let’s test inter-service communication from the Swagger UI. I called the endpoint GET /organizations/{id}/with-departments, which calls the endpoint GET /departments/organization/{organizationId} exposed by department-service. The result is visible below.

[Screenshot: inter-service communication result in Swagger UI]

7. Running Microservices on OpenShift

We have already finished the implementation of our sample microservices architecture and run it on the local machine. Now we can proceed to the last step and try to deploy these applications on Minishift. There are several different approaches to deploying a Quarkus application on OpenShift. Today I’ll show you how to leverage the S2I build mechanism for that.
We are going to use the Quarkus GraalVM Native S2I builder. It is available on quay.io as quarkus/ubi-quarkus-native-s2i. Of course, before deploying our applications we need to start Minishift. According to the Quarkus documentation, a GraalVM-based native build consumes a lot of memory and CPU, so I decided to assign 6GB and 4 cores to Minishift.

$ minishift start --vm-driver=virtualbox --memory=6G --cpus=4

Also, we need to modify the source code of our applications a little. As you probably remember, we used JDK 11 for running them locally. The Quarkus S2I builder supports only JDK 8, so we need to change it in our pom.xml. We also need to include a declaration of the native profile as shown below:

<properties>
	<quarkus.version>0.21.1</quarkus.version>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	<maven.compiler.source>1.8</maven.compiler.source>
	<maven.compiler.target>1.8</maven.compiler.target>
</properties>
...
<profiles>
	<profile>
		<id>native</id>
		<activation>
			<property>
				<name>native</name>
			</property>
		</activation>
		<build>
			<plugins>
				<plugin>
					<groupId>io.quarkus</groupId>
					<artifactId>quarkus-maven-plugin</artifactId>
					<version>${quarkus.version}</version>
					<executions>
						<execution>
							<goals>
								<goal>native-image</goal>
							</goals>
							<configuration>
								<enableHttpUrlHandler>true</enableHttpUrlHandler>
							</configuration>
						</execution>
					</executions>
				</plugin>
				<plugin>
					<artifactId>maven-failsafe-plugin</artifactId>
					<version>2.22.1</version>
					<executions>
						<execution>
							<goals>
								<goal>integration-test</goal>
								<goal>verify</goal>
							</goals>
							<configuration>
								<systemProperties>
									<native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
								</systemProperties>
							</configuration>
						</execution>
					</executions>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>
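
With this profile in place you can also build the native executable locally (assuming a GraalVM installation with the native-image tool) by activating the profile:

$ mvn clean package -Pnative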

Two other changes should be performed inside the application.properties files. We don’t have to override the port number, since Minishift dynamically assigns a virtual IP to every pod. Inter-service communication is realized via OpenShift discovery, so we just need to set the name of the service instead of localhost.

quarkus.swagger-ui.always-include=true
pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://department:8080
pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://employee:8080

Finally we may deploy our applications on Minishift. To do that you should execute the following commands using your oc client:

$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:19.1.1~https://github.com/piomin/sample-quarkus-microservices.git#openshift --context-dir=employee --name=employee
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:19.1.1~https://github.com/piomin/sample-quarkus-microservices.git#openshift --context-dir=department --name=department
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:19.1.1~https://github.com/piomin/sample-quarkus-microservices.git#openshift --context-dir=organization --name=organization

As you can see, the repository with the applications’ source code is available on my GitHub account at https://github.com/piomin/sample-quarkus-microservices.git. The version for running on Minishift has been shared within the openshift branch. The version for running on the local machine is available on the master branch. Because all the applications are stored within a single repository, we need to define the context-dir parameter for every single deployment.
I was quite disappointed here. Although I assigned more memory and CPU to Minishift, my builds took a very long time – about 25 minutes.

[Screenshot: native S2I builds running on Minishift]

However, after a long wait, all my applications were finally deployed.

[Screenshot: deployed applications in the OpenShift console]

I exposed them outside Minishift by executing the commands visible below. They can be tested using the OpenShift routes available under the DNS name http://${APP_NAME}-myproject.192.168.99.100.nip.io.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization
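
For example, to verify that employee-service responds through its route:

$ curl http://employee-myproject.192.168.99.100.nip.io/employees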

Additionally, you may enable readiness and liveness health checks on OpenShift, since they are disabled by default.
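
Here’s a minimal sketch of enabling them from the command line with oc set probe, pointing at the SmallRye Health endpoints exposed by the applications:

$ oc set probe dc/employee --readiness --get-url=http://:8080/health/ready
$ oc set probe dc/employee --liveness --get-url=http://:8080/health/live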

[Screenshot: health checks configured on OpenShift]

Continuous Delivery with OpenShift and Jenkins: A/B Testing

One of the reasons you could decide to use OpenShift instead of some other container platform (for example plain Kubernetes) is its out-of-the-box support for continuous delivery pipelines. Without proper tools the process of releasing software in your organization may be really time-consuming and painful. The speed of that process becomes especially important if you deliver software to production frequently. Currently, the most popular use case for it is a microservices-based architecture, where you have many small, independent applications.

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Given this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is part of the powerful Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text searches over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always a question about the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins).

Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests for a system consisting of several microservices may be quite a challenge. Today I’m going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers, so take a look at those articles if you are interested in an introduction to these frameworks.

Today we will consider a system consisting of three microservices, where each microservice is developed by a different team. One of these microservices, trip-management, integrates with two others: driver-management and passenger-management. The question is how to organize integration tests under these assumptions. In that case we can use one of the interesting features provided by Hoverfly – the ability to run it as a remote proxy. What does it mean in practice? It is illustrated in the picture below. The same external instance of the Hoverfly proxy is shared between all microservices during JUnit tests. The driver-management and passenger-management microservices test their own methods exposed for use by trip-management, but all the requests are sent through the remote Hoverfly instance acting as a proxy. Hoverfly captures all the requests and responses sent during the tests. On the other hand, trip-management also tests its methods, but the communication with the other microservices is simulated by the remote Hoverfly instance based on the previously captured HTTP traffic.

[Diagram: microservices sharing a remote Hoverfly proxy during JUnit tests]

We will use Docker to run the remote instance of the Hoverfly proxy. We will also use Docker images of the microservices during the tests. That’s why we need the Testcontainers framework, which is responsible for running an application container before starting the integration tests. So, the first step is to build Docker images of the driver-management and passenger-management microservices.

1. Building Docker Image

Assuming you have successfully installed Docker on your machine and have set the environment variables DOCKER_HOST and DOCKER_CERT_PATH, you may use io.fabric8:docker-maven-plugin for it. It is important to execute the build goal of that plugin just after the package Maven phase, but before the integration-test phase. Here’s the appropriate configuration inside the Maven pom.xml.

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<configuration>
		<images>
			<image>
				<name>piomin/driver-management</name>
				<alias>dockerfile</alias>
				<build>
					<dockerFileDir>${project.basedir}</dockerFileDir>
				</build>
			</image>
		</images>
	</configuration>
	<executions>
		<execution>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>build</goal>
			</goals>
		</execution>
	</executions>
</plugin>

2. Application Integration Tests

Our integration tests should run during the integration-test phase, so they must not be executed during the test phase, which takes place before the application fat jar and the Docker image are built. Here’s the appropriate configuration of maven-surefire-plugin.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<excludes>
			<exclude>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</exclude>
		</excludes>
	</configuration>
	<executions>
		<execution>
			<id>integration-test</id>
			<goals>
				<goal>test</goal>
			</goals>
			<phase>integration-test</phase>
			<configuration>
				<excludes>
					<exclude>none</exclude>
				</excludes>
				<includes>
					<include>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</include>
				</includes>
			</configuration>
		</execution>
	</executions>
</plugin>

3. Running Hoverfly

Before running any tests we need to start an instance of Hoverfly in proxy mode. To achieve that we use the Hoverfly Docker image. Because Hoverfly has to forward requests to the downstream microservices by host name, we create a Docker network and then run Hoverfly in this network.

$ docker network create tests
$ docker run -d --name hoverfly -p 8500:8500 -p 8888:8888 --network tests spectolabs/hoverfly

The Hoverfly proxy is now available for me (I’m using Docker Toolbox) at 192.168.99.100:8500. We can also take a look at the admin web console available at http://192.168.99.100:8888. Under that address you can also access the HTTP API, which is described in the next section.
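
For example, the simulation data recorded by Hoverfly can be fetched from the admin API at any time:

$ curl http://192.168.99.100:8888/api/v2/simulation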

4. Including test dependencies

To enable Hoverfly and Testcontainers for our tests we first need to include some dependencies in the Maven pom.xml. Our sample applications are built on top of Spring Boot, so we also include the Spring Test project.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.6</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.11.1</version>
	<scope>test</scope>
</dependency>

5. Building integration tests on the provider side

Now we can finally proceed to the JUnit test implementation. Here’s the full source code of the test for the driver-management microservice; some things need to be explained. Before running our test methods we first start the Docker container of the application using Testcontainers. We use a GenericContainer annotated with @ClassRule for that. Testcontainers provides an API for interacting with containers, so we can easily set the target Docker network and the container hostname. We also wait until the application container is ready for use by calling the waitingFor method on GenericContainer.
The next step is to enable the Hoverfly rule for our test. We run it in capture mode. By default Hoverfly tries to start a local proxy instance; that’s why we provide the remote address of the existing instance already started as a Docker container.
The tests are pretty simple. We call the endpoints using Spring TestRestTemplate. Because each request must finally be proxied to the application container, we use its hostname as the target address. The whole traffic is captured by Hoverfly.

public class DriverControllerIntegrationTests {

    private TestRestTemplate template = new TestRestTemplate();

    @ClassRule
    public static GenericContainer appContainer = new GenericContainer<>("piomin/driver-management")
            .withCreateContainerCmdModifier(cmd -> cmd.withName("driver-management").withHostName("driver-management"))
            .withNetworkMode("tests")
            .withNetworkAliases("driver-management")
            .withExposedPorts(8080)
            .waitingFor(Wait.forHttp("/drivers"));

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inCaptureMode("driver.json", HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Test
    public void testFindNearestDriver() {
        Driver driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 40, 20);
        Assert.assertNotNull(driver);
        driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 10, 20);
        Assert.assertNotNull(driver);
    }

    @Test
    public void testUpdateDriver() {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        DriverInput input = new DriverInput();
        input.setId(2L);
        input.setStatus(DriverStatus.UNAVAILABLE);
        HttpEntity<DriverInput> entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
        input.setId(1L);
        input.setStatus(DriverStatus.AVAILABLE);
        entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
    }

}

Now you can execute the tests during the application build using the mvn clean verify command. The sample application source code is available on GitHub in the repository sample-testing-microservices under the branch remote.

6. Building integration tests on the consumer side

In the previous section we discussed the integration tests implemented on the provider side. There are two microservices, driver-management and passenger-management, that expose endpoints invoked by the third microservice, trip-management. The traffic generated during those tests has already been captured by Hoverfly. This is very important in this sample, because each time you build the newest version of a microservice, Hoverfly refreshes the structure of the previously recorded requests. Now, if we run the tests for the consumer application (trip-management), they fully rely on the newest version of the requests generated during the tests by the microservices on the provider side. You can check out the list of all requests captured by Hoverfly by calling the endpoint http://192.168.99.100:8888/api/v2/simulation.
Here are the integration tests implemented inside trip-management. They also use the remote Hoverfly proxy instance. The only difference is the running mode, which is simulation: Hoverfly simulates the requests sent to driver-management and passenger-management based on the traffic it captured earlier.

@SpringBootTest
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripIntegrationTests {

    ObjectMapper mapper = new ObjectMapper();

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inSimulationMode(HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Autowired
    MockMvc mockMvc;

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

Now you can run the mvn clean verify command on the root module. It runs the tests in the following order: driver-management, passenger-management and trip-management.

[Screenshot: Maven build running the tests]

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I used Docker containers for running third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if your application integrates with databases, message brokers or some other popular tools. If you are planning to build such integration tests you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way to run instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx or HashiCorp’s Vault. Today I’m going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database and Vault. For the purposes of this example we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case.
I described there how to use a very interesting Vault feature called secret engines for generating database user credentials dynamically. I used the Spring Cloud Vault module in my Spring Boot application to automatically integrate with that feature of Vault. The implemented mechanism is pretty easy. On startup, before it tries to connect to the Postgres database, the application calls the Vault secret engine. Vault is integrated with Postgres via the secret engine, which creates a user with sufficient privileges on Postgres. Then the generated credentials are automatically injected into the auto-configured Spring Boot properties used for connecting to the database: spring.datasource.username and spring.datasource.password. The following picture illustrates the described solution.

[Diagram: Spring Boot application obtaining Postgres credentials dynamically from the Vault secret engine]

OK, we know how it works; now the question is how to test it automatically. With Testcontainers it is possible with just a few lines of code.

1. Building application

Let’s begin with a short intro to the application code. It is very simple. Here’s the list of dependencies required for building an application that exposes a REST API and integrates with Postgres and Vault.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The application connects to Postgres, enables integration with Vault via Spring Cloud Vault, and automatically creates/updates tables on startup.

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres
  jpa.hibernate.ddl-auto: update

It exposes a single endpoint. The following method is responsible for handling incoming requests. It just inserts a record into the database and returns a response with the app name, version and the id of the inserted record.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
	
	@Autowired
	Optional<BuildProperties> buildProperties;
	@Autowired
	CallmeRepository repository;
	
	@GetMapping("/message/{message}")
	public String ping(@PathVariable("message") String message) {
		Callme c = repository.save(new Callme(message, new Date()));
		if (buildProperties.isPresent()) {
			BuildProperties infoProperties = buildProperties.get();
			LOGGER.info("Ping: name={}, version={}", infoProperties.getName(), infoProperties.getVersion());
			return infoProperties.getName() + ":" + infoProperties.getVersion() + ":" + c.getId();
		} else {
			return "callme-service:"  + c.getId();
		}
	}
	
}
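
Assuming the application runs locally on the default port, a call looks like the following. The name and version segments come from the build metadata, so the values below are only an illustration:

$ curl http://localhost:8080/callme/message/hello
callme-service:1.0-SNAPSHOT:1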

2. Enabling Testcontainers

To enable Testcontainers for our project we need to include some dependencies in our Maven pom.xml. There are dedicated modules for Postgres and Vault. We also include the Spring Boot Test dependency, because we would like to test the whole Spring Boot app.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>vault</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>postgresql</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>

3. Running Vault test container

The Testcontainers framework supports JUnit 4, JUnit 5 and Spock. The Vault container is started before the tests when it is annotated with @Rule or @ClassRule. By default it uses version 0.7, but we can override it with the newest version, which is 1.0.2. We also set a root token, which is then required by Spring Cloud Vault for the integration with Vault.

@ClassRule
public static VaultContainer vaultContainer = new VaultContainer<>("vault:1.0.2")
	.withVaultToken("123456")
	.withVaultPort(8200);

That root token is then provided to the application by overriding the spring.cloud.vault.token property on the test class.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, properties = {
    "spring.cloud.vault.token=123456"
})
public class CallmeTest { ... }

4. Running Postgres test container

As an alternative to @ClassRule, we can manually start a container in a @BeforeClass or @Before method in the test. With this approach you will also have to stop it manually in an @AfterClass or @After method. We start the Postgres container manually, because by default it is exposed on a dynamically generated port, which needs to be set for the Spring Boot application before starting the test. The mapped port is returned by the getFirstMappedPort method invoked on PostgreSQLContainer.

private static PostgreSQLContainer postgresContainer = new PostgreSQLContainer()
	.withDatabaseName("postgres")
	.withUsername("postgres")
	.withPassword("postgres123");
	
@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	// ...
}

@AfterClass
public static void shutdown() {
	postgresContainer.stop();
}

5. Integrating Vault and Postgres containers

Once we have successfully started both the Vault and Postgres containers, we need to integrate them via the Vault secret engine. First, we need to enable the database secret engine in Vault. After that we must configure the connection to Postgres. The last step is to configure a role. A role is a logical name that maps to a policy used to generate those credentials. All these actions may be performed using Vault commands. You can launch a command on the Vault container using the execInContainer method. The Vault configuration commands should be executed just after the Postgres container startup.

@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	vaultContainer.execInContainer("vault", "secrets", "enable", "database");
	String url = String.format("connection_url=postgresql://{{username}}:{{password}}@192.168.99.100:%d?sslmode=disable", port);
	vaultContainer.execInContainer("vault", "write", "database/config/postgres", "plugin_name=postgresql-database-plugin", "allowed_roles=default", url, "username=postgres", "password=postgres123");
	vaultContainer.execInContainer("vault", "write", "database/roles/default", "db_name=postgres",
		"creation_statements=CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
		"default_ttl=1h", "max_ttl=24h");
}

6. Running application tests

Finally, we may run the application tests. We just call the single endpoint exposed by the app using TestRestTemplate and verify the output.

@Autowired
TestRestTemplate template;

@Test
public void test() {
	String res = template.getForObject("/callme/message/{message}", String.class, "Test");
	Assert.assertNotNull(res);
	Assert.assertTrue(res.endsWith("1"));
}

If you are interested in what exactly happens during the test, you can set a breakpoint inside the test method and execute the docker ps command manually.

[Screenshot: docker ps output showing the containers started during the test]