Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine designed especially for working with large data sets. Given that description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is part of the powerful Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text searches over a large data set or simply store many historical records that are no longer modified by the application. Of course, there is always the question of the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then pushes the data to Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins).
No matter how you import your data into Elasticsearch, you have to consider another problem: the data structure. You probably have data distributed across several tables in your relational database. If you would like to take advantage of Elasticsearch, you should store it as a single type. This forces you to keep redundant data, which results in larger disk space usage. Of course, that effect is acceptable if the queries work faster than the equivalent queries against the relational database.
Ok, let's proceed to the example after that long introduction. Spring Boot provides an easy way to interact with Elasticsearch through Spring Data repositories.

1. Enabling Elasticsearch support

As is customary with Spring Boot, we don't have to provide any additional beans in the context to enable support for Elasticsearch. We just need to include the following dependency in our pom.xml:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

By default, the application tries to connect to Elasticsearch on localhost. If we use another target URL, we need to override it in the configuration settings. Here's a fragment of our application.yml file that overrides the default cluster name and address with the address of Elasticsearch started in a Docker container:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.99.100:9300

The health status of the Elasticsearch connection may be exposed by the application through the Spring Boot Actuator health endpoint. First, you need to include the following Maven dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The health check is enabled by default, and the Elasticsearch check is auto-configured. However, this verification is performed via the Elasticsearch REST API client. In that case, we need to override the property spring.elasticsearch.rest.uris, which is responsible for setting the address used by the REST client:

spring:
  elasticsearch:
    rest:
      uris: http://192.168.99.100:9200
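
With both properties set, the health endpoint reports the status of the Elasticsearch connection. You can verify it, for example, with a simple call to the Actuator endpoint (assuming the application runs on the default port 8080):

$ curl http://localhost:8080/actuator/health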

2. Running Elasticsearch

For our tests we need a single-node Elasticsearch instance running in development mode. As usual, we will use a Docker container. Here's the command that starts the Docker container and exposes it on ports 9200 and 9300.

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2
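
To make sure the node is up, you can call its REST API. The address below assumes Docker Toolbox; with a native Docker installation it would typically be localhost:

$ curl http://192.168.99.100:9200
$ curl http://192.168.99.100:9200/_cat/indices?v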

3. Building Spring Data Repositories

To enable Elasticsearch repositories we just need to annotate the main or configuration class with @EnableElasticsearchRepositories:

@SpringBootApplication
@EnableElasticsearchRepositories
public class SampleApplication { ... }

The next step is to create a repository interface that extends CrudRepository. It provides some basic operations like save or findById. If you would like some additional find methods, you should define new methods inside the interface following the Spring Data naming convention. For nested objects, the property path is traversed, so findByOrganizationName queries the name field of the embedded Organization object.

public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    List<Employee> findByOrganizationName(String name);
    List<Employee> findByName(String name);

}

4. Building Document

Our relational structure of entities is flattened into a single Employee object that contains related objects (Organization, Department). You can compare this approach to creating a view over a group of related tables in an RDBMS. In Spring Data Elasticsearch nomenclature, a single object is stored as a document. So, you need to annotate your object with @Document. You should also set the name of the target Elasticsearch index, the type, and the id. Additional mappings can be configured with the @Field annotation.

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private Long id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;
	
    // Getters and Setters ...

}
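
For completeness, here's a minimal sketch of the related classes. Only the fields implied by the rest of this article are included, and the constructors match those used later in the integration test:

public class Organization {

    private Long id;
    private String name;
    private String address;

    public Organization() {
    }

    public Organization(Long id, String name, String address) {
        this.id = id;
        this.name = name;
        this.address = address;
    }

    // Getters and Setters ...

}

public class Department {

    private Long id;
    private String name;

    public Department() {
    }

    public Department(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Getters and Setters ...

}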

5. Initial import

As I mentioned in the introduction, the main reason you may decide to use Elasticsearch is the need to work with large data sets. Therefore, it is desirable to fill our test Elasticsearch node with many documents. If you would like to insert many documents in one step, you should definitely use the Bulk API, which makes it possible to perform many index/delete operations in a single API call. This can greatly increase indexing speed.
Bulk operations may be performed with the Spring Data ElasticsearchTemplate bean, which is also auto-configured by Spring Boot. The template provides the bulkIndex method, which takes a list of index queries as an input parameter. Here's the implementation of a bean that inserts sample test data on application startup:

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";

    @Autowired
    EmployeeRepository repository;
    @Autowired
    ElasticsearchTemplate template;

    @PostConstruct
    public void init() {
        for (int i = 0; i < 10000; i++) {
            bulk(i);
        }
    }

    public void bulk(int ii) {
        try {
            // create the target index on the first run
            if (!template.indexExists(INDEX_NAME)) {
                template.createIndex(INDEX_NAME);
            }
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setId(employee.getId().toString());
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            // refresh the index so that the new documents become searchable
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", ii);
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }
	
	// sample data set implementation ...
	
}

If you don't need to insert data on startup, you can disable that process by setting the property initial-import.enabled to false. Here's the declaration of the SampleDataSet bean:

@Bean
@ConditionalOnProperty("initial-import.enabled")
public SampleDataSet dataSet() {
	return new SampleDataSet();
}
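
Here's the corresponding fragment of application.yml. Setting the property to false (or removing it) skips the bulk import on startup:

initial-import:
  enabled: true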

6. Viewing data and running queries

Assuming that you have already started the sample application, that the bean responsible for bulk indexing was not disabled, and that you were patient enough to wait the few hours until all the data was inserted into your Elasticsearch node, it now contains 100M documents of employee type. It is worth displaying some information about your cluster. You can do it using Elasticsearch queries, or you can download one of the available GUI tools, for example ElasticHQ. Fortunately, ElasticHQ is also available as a Docker container. You have to execute the following command to start a container with ElasticHQ:

$ docker run -d --name elastichq -p 5000:5000 elastichq/elasticsearch-hq

After startup, the ElasticHQ GUI can be accessed via a web browser on port 5000. Its web console provides basic information about the cluster and its indices, and allows you to perform queries. You only need to enter the Elasticsearch node address, and you will be redirected to the main dashboard with statistics. Here's the main dashboard of ElasticHQ.

[Figure: ElasticHQ main dashboard]

As you can see, we have a single index called sample divided into 5 shards. That is the default value provided by Spring Data @Document, which can be overridden with the shards attribute. After clicking on the index, we can navigate to the index management panel. There you can perform some operations on the index, like clearing the cache or refreshing the index. You can also take a look at the statistics for all shards.

[Figure: ElasticHQ index management panel]

For the current test purposes, I have around 25M documents of Employee type (about 3GB of disk space). We can execute some test queries. I have exposed two endpoints for searching: by employee name GET /employees/{name} and by organization name GET /employees/organization/{organizationName} (a sketch of the controller is shown below). The results are not overwhelming. I think we could achieve the same results with a relational database holding the same amount of data.

[Figure: sample query results]
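
For reference, here's a minimal sketch of the controller behind those two endpoints. It simply delegates to the repository; the class name is illustrative:

@RestController
@RequestMapping("/employees")
public class EmployeeController {

    @Autowired
    EmployeeRepository repository;

    // search by employee name, e.g. GET /employees/John Smith
    @GetMapping("/{name}")
    public List<Employee> findByName(@PathVariable("name") String name) {
        return repository.findByName(name);
    }

    // search by organization name, e.g. GET /employees/organization/TestO
    @GetMapping("/organization/{organizationName}")
    public List<Employee> findByOrganization(@PathVariable("organizationName") String organizationName) {
        return repository.findByOrganizationName(organizationName);
    }

}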

7. Testing

Ok, we have already finished development and performed some manual tests on a large data set. Now, it's time to create some integration tests that run at build time. We can use Testcontainers, a library that automatically starts Docker containers with databases during JUnit tests. For more about this library, you may refer to its site https://www.testcontainers.org or to one of my previous articles: Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework. Fortunately, Testcontainers supports Elasticsearch. To enable it in the test scope, you first need to include the following dependency in your pom.xml:

<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>elasticsearch</artifactId>
	<version>1.11.1</version>
	<scope>test</scope>
</dependency>

The next step is to define a @ClassRule or @Rule field that points to the Elasticsearch container. It is started automatically before the test class or before each test method, depending on which annotation you use. The exposed port number is generated automatically, so you need to retrieve it and set it as the value of the spring.data.elasticsearch.cluster-nodes property. Because of @FixMethodOrder, the test methods are executed in name order, so testAdd inserts the document that the subsequent find tests expect. Here's the full implementation of our JUnit integration test:

@RunWith(SpringRunner.class)
@SpringBootTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class EmployeeRepositoryTest {

    @ClassRule
    public static ElasticsearchContainer container = new ElasticsearchContainer();
    @Autowired
    EmployeeRepository repository;

    @BeforeClass
    public static void before() {
        System.setProperty("spring.data.elasticsearch.cluster-nodes", container.getContainerIpAddress() + ":" + container.getMappedPort(9300));
    }

    @Test
    public void testAdd() {
        Employee employee = new Employee();
        employee.setId(1L);
        employee.setName("John Smith");
        employee.setAge(33);
        employee.setPosition("Developer");
        employee.setDepartment(new Department(1L, "TestD"));
        employee.setOrganization(new Organization(1L, "TestO", "Test Street No. 1"));
        employee = repository.save(employee);
        Assert.assertNotNull(employee);
    }

    @Test
    public void testFindAll() {
        Iterable<Employee> employees = repository.findAll();
        Assert.assertTrue(employees.iterator().hasNext());
    }

    @Test
    public void testFindByOrganization() {
        List<Employee> employees = repository.findByOrganizationName("TestO");
        Assert.assertTrue(employees.size() > 0);
    }

    @Test
    public void testFindByName() {
        List<Employee> employees = repository.findByName("John Smith");
        Assert.assertTrue(employees.size() > 0);
    }

}

Summary

In this article you have learned how to:

  • Run your local instance of Elasticsearch with Docker
  • Integrate Spring Boot application with Elasticsearch
  • Use Spring Data Repositories for saving data and performing simple queries
  • Use Spring Data ElasticsearchTemplate to perform bulk operations on an index
  • Use ElasticHQ for monitoring your cluster
  • Build automatic integration tests for Elasticsearch with Testcontainers

The sample application source code is, as usual, available on GitHub in the repository sample-spring-elasticsearch.

Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests for a system consisting of several microservices may be quite a challenge. Today I'm going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers, so you may take a look at those articles for an introduction to these frameworks.

Today we will consider a system consisting of three microservices, where each microservice is developed by a different team. One of these microservices, trip-management, integrates with the two others: driver-management and passenger-management. The question is how to organize integration tests under these assumptions. In that case we can use one of the interesting features provided by Hoverfly: the ability to run it as a remote proxy. What does this mean in practice? It is illustrated in the picture below. The same external instance of the Hoverfly proxy is shared between all microservices during JUnit tests. The driver-management and passenger-management microservices test their own methods exposed for use by trip-management, but all the requests are sent through the remote Hoverfly instance acting as a proxy. Hoverfly captures all the requests and responses sent during the tests. On the other hand, trip-management also tests its methods, but the communication with the other microservices is simulated by the remote Hoverfly instance based on the previously captured HTTP traffic.

[Figure: test architecture with a shared remote Hoverfly proxy]

We will use Docker for running a remote instance of the Hoverfly proxy. We will also use Docker images of the microservices during the tests. That's why we need the Testcontainers framework, which is responsible for running the application container before the integration tests start. So, the first step is to build Docker images of the driver-management and passenger-management microservices.

1. Building Docker Image

Assuming you have successfully installed Docker on your machine and have set the environment variables DOCKER_HOST and DOCKER_CERT_PATH, you may use io.fabric8:docker-maven-plugin for it. It is important to execute the build goal of that plugin just after the package Maven phase, but before the integration-test phase. Here's the appropriate configuration inside the Maven pom.xml.

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<configuration>
		<images>
			<image>
				<name>piomin/driver-management</name>
				<alias>dockerfile</alias>
				<build>
					<dockerFileDir>${project.basedir}</dockerFileDir>
				</build>
			</image>
		</images>
	</configuration>
	<executions>
		<execution>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>build</goal>
			</goals>
		</execution>
	</executions>
</plugin>
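
The dockerFileDir setting above points at the project base directory, so the plugin expects a Dockerfile there. A minimal one for a Spring Boot fat jar might look like this (the base image and jar name are my assumptions):

FROM openjdk:8-jre-alpine
ADD target/driver-management-1.0-SNAPSHOT.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]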

2. Application Integration Tests

Our integration tests should run during the integration-test phase, so they must not be executed during the test phase, which runs before the application fat jar and the Docker image are built. Here's the appropriate configuration of maven-surefire-plugin.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<excludes>
			<exclude>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</exclude>
		</excludes>
	</configuration>
	<executions>
		<execution>
			<id>integration-test</id>
			<goals>
				<goal>test</goal>
			</goals>
			<phase>integration-test</phase>
			<configuration>
				<excludes>
					<exclude>none</exclude>
				</excludes>
				<includes>
					<include>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</include>
				</includes>
			</configuration>
		</execution>
	</executions>
</plugin>

3. Running Hoverfly

Before running any tests we need to start an instance of Hoverfly in proxy mode. To achieve this we use the Hoverfly Docker image. Because Hoverfly has to forward requests to the downstream microservices by host name, we create a Docker network and then run Hoverfly in that network.

$ docker network create tests
$ docker run -d --name hoverfly -p 8500:8500 -p 8888:8888 --network tests spectolabs/hoverfly

The Hoverfly proxy is now available for me (I'm using Docker Toolbox) at 192.168.99.100:8500. We can also take a look at the admin web console available at http://192.168.99.100:8888. Under that address you can also access the HTTP API, which is described in the next section.
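
For example, you can check Hoverfly's current mode or inspect the captured traffic with simple HTTP calls:

$ curl http://192.168.99.100:8888/api/v2/hoverfly/mode
$ curl http://192.168.99.100:8888/api/v2/simulation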

4. Including test dependencies

To enable Hoverfly and Testcontainers for our tests, we first need to include some dependencies in the Maven pom.xml. Our sample applications are built on top of Spring Boot, so we also include the Spring Test project.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.6</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.11.1</version>
	<scope>test</scope>
</dependency>

5. Building integration tests on the provider side

Now, we can finally proceed to the JUnit test implementation. Here's the full source code of the test for the driver-management microservice; a few things need to be explained. Before running our test methods, we first start the Docker container of the application using Testcontainers. We use a GenericContainer annotated with @ClassRule for that. Testcontainers provides an API for interacting with containers, so we can easily set the target Docker network and the container hostname. We also wait until the application container is ready for use by calling the waitingFor method on GenericContainer.
The next step is to enable the Hoverfly rule for our test. We run it in capture mode. By default, Hoverfly tries to start a local proxy instance; that's why we provide the remote address of the existing instance already started as a Docker container.
The tests are pretty simple. We call the endpoints using Spring TestRestTemplate. Because the requests must finally be proxied to the application container, we use its hostname as the target address. The whole traffic is captured by Hoverfly.

public class DriverControllerIntegrationTests {

    private TestRestTemplate template = new TestRestTemplate();

    @ClassRule
    public static GenericContainer appContainer = new GenericContainer<>("piomin/driver-management")
            .withCreateContainerCmdModifier(cmd -> cmd.withName("driver-management").withHostName("driver-management"))
            .withNetworkMode("tests")
            .withNetworkAliases("driver-management")
            .withExposedPorts(8080)
            .waitingFor(Wait.forHttp("/drivers"));

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inCaptureMode("driver.json", HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Test
    public void testFindNearestDriver() {
        Driver driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 40, 20);
        Assert.assertNotNull(driver);
        driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 10, 20);
        Assert.assertNotNull(driver);
    }

    @Test
    public void testUpdateDriver() {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        DriverInput input = new DriverInput();
        input.setId(2L);
        input.setStatus(DriverStatus.UNAVAILABLE);
        HttpEntity<DriverInput> entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
        input.setId(1L);
        input.setStatus(DriverStatus.AVAILABLE);
        entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
    }

}

Now, you can execute the tests during the application build using the mvn clean verify command. The sample application source code is available on GitHub in the repository sample-testing-microservices under the branch remote.

6. Building integration tests on the consumer side

In the previous section we discussed the integration tests implemented on the provider side. There are two microservices, driver-management and passenger-management, that expose endpoints invoked by the third microservice, trip-management. The traffic generated during the tests has already been captured by Hoverfly. This is a very important detail of this sample, because each time you build the newest version of a microservice, Hoverfly refreshes the structure of the previously recorded requests. Now, if we run the tests for the consumer application (trip-management), they are fully based on the newest version of the requests generated during the tests by the microservices on the provider side. You can check out the list of all requests captured by Hoverfly by calling the endpoint http://192.168.99.100:8888/api/v2/simulation.
Here are the integration tests implemented inside trip-management. They also use the remote Hoverfly proxy instance. The only difference is the running mode, which is simulation. Hoverfly simulates the requests sent to driver-management and passenger-management based on the traffic it captured earlier.

@SpringBootTest
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripIntegrationTests {

    ObjectMapper mapper = new ObjectMapper();

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inSimulationMode(HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Autowired
    MockMvc mockMvc;

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

Now, you can run the command mvn clean verify on the root module. It runs the tests in the following order: driver-management, passenger-management, and trip-management.

[Figure: Maven build output for the test execution]

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I used Docker containers for running third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if our application integrates with databases, message brokers, or some other popular tools. If you are planning to build such integration tests, you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra, or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx, or HashiCorp's Vault. Today I'm going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database, and Vault. For the purposes of this example, we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case.
I described there how to use a very interesting Vault feature called secret engines to generate database user credentials dynamically. I used the Spring Cloud Vault module in my Spring Boot application to automatically integrate with that feature of Vault. The implemented mechanism is pretty simple. The application calls the Vault secret engine before it tries to connect to the Postgres database on startup. Vault is integrated with Postgres via the secret engine, which is why it can create a user with sufficient privileges on Postgres. The generated credentials are then automatically injected into the auto-configured Spring Boot properties used for connecting to the database, spring.datasource.username and spring.datasource.password. The following picture illustrates the described solution.

[Figure: application integration with Vault and Postgres via the database secret engine]

Ok, we know how it works. Now the question is how to test it automatically. With Testcontainers it is possible with just a few lines of code.

1. Building application

Let's begin with a short intro to the application code. It is very simple. Here's the list of dependencies required for building an application that exposes a REST API and integrates with Postgres and Vault.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The application connects to Postgres, enables integration with Vault via Spring Cloud Vault, and automatically creates/updates tables on startup.

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres
  jpa.hibernate.ddl-auto: update

It exposes a single endpoint. The following method is responsible for handling incoming requests. It just inserts a record into the database and returns a response containing the app name, version, and the id of the inserted record.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
	
	@Autowired
	Optional<BuildProperties> buildProperties;
	@Autowired
	CallmeRepository repository;
	
	@GetMapping("/message/{message}")
	public String ping(@PathVariable("message") String message) {
		Callme c = repository.save(new Callme(message, new Date()));
		if (buildProperties.isPresent()) {
			BuildProperties infoProperties = buildProperties.get();
			LOGGER.info("Ping: name={}, version={}", infoProperties.getName(), infoProperties.getVersion());
			return infoProperties.getName() + ":" + infoProperties.getVersion() + ":" + c.getId();
		} else {
			return "callme-service:"  + c.getId();
		}
	}
	
}
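
The Callme entity referenced above is not shown in the listing; here's a minimal sketch, assuming only the fields used by the controller (id, message, and a timestamp):

@Entity
public class Callme {

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;
	private String message;
	@Temporal(TemporalType.TIMESTAMP)
	private Date createdAt;

	public Callme() {
	}

	public Callme(String message, Date createdAt) {
		this.message = message;
		this.createdAt = createdAt;
	}

	public Long getId() {
		return id;
	}

	// remaining Getters and Setters ...

}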

2. Enabling Testcontainers

To enable Testcontainers for our project, we need to include some dependencies in our Maven pom.xml. There are dedicated modules for Postgres and Vault. We also include the Spring Boot Test dependency, because we would like to test the whole Spring Boot app.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>vault</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>postgresql</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>

3. Running Vault test container

The Testcontainers framework supports JUnit 4, JUnit 5, and Spock. The Vault container is started before the tests if it is annotated with @Rule or @ClassRule. By default it uses version 0.7, but we can override it with the newest version, which is 1.0.2. We also set a root token, which is then required by Spring Cloud Vault for integration with Vault.

@ClassRule
public static VaultContainer vaultContainer = new VaultContainer<>("vault:1.0.2")
	.withVaultToken("123456")
	.withVaultPort(8200);

That root token is then passed to the application by overriding the spring.cloud.vault.token property on the test class:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, properties = {
    "spring.cloud.vault.token=123456"
})
public class CallmeTest { ... }

4. Running Postgres test container

As an alternative to @ClassRule, we can manually start the container in a @BeforeClass or @Before method in the test. With this approach, you will also have to stop it manually in an @AfterClass or @After method. We start the Postgres container manually, because by default it is exposed on a dynamically generated port, which needs to be set for the Spring Boot application before starting the test. The mapped port is returned by the getFirstMappedPort method invoked on PostgreSQLContainer.

private static PostgreSQLContainer postgresContainer = new PostgreSQLContainer()
	.withDatabaseName("postgres")
	.withUsername("postgres")
	.withPassword("postgres123");
	
@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	// ...
}

@AfterClass
public static void shutdown() {
	postgresContainer.stop();
}

5. Integrating Vault and Postgres containers

Once we have successfully started both the Vault and Postgres containers, we need to integrate them via a Vault secret engine. First, we need to enable the database secret engine in Vault. After that, we must configure the connection to Postgres. The last step is to configure a role. A role is a logical name that maps to a policy used to generate those credentials. All these actions may be performed using Vault commands. You can launch a command on the Vault container using the execInContainer method. The Vault configuration commands should be executed just after the Postgres container starts.

@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	vaultContainer.execInContainer("vault", "secrets", "enable", "database");
	String url = String.format("connection_url=postgresql://{{username}}:{{password}}@192.168.99.100:%d?sslmode=disable", port);
	vaultContainer.execInContainer("vault", "write", "database/config/postgres", "plugin_name=postgresql-database-plugin", "allowed_roles=default", url, "username=postgres", "password=postgres123");
	vaultContainer.execInContainer("vault", "write", "database/roles/default", "db_name=postgres",
		"creation_statements=CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
		"default_ttl=1h", "max_ttl=24h");
}
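
If you would like to verify that the configuration works, you can optionally generate a set of credentials manually by reading the role, exactly as Spring Cloud Vault will do on application startup. A minimal sketch, executed at the end of init():

// optional sanity check: ask Vault for a fresh set of credentials
Container.ExecResult result = vaultContainer.execInContainer("vault", "read", "database/creds/default");
System.out.println(result.getStdout());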

6. Running application tests

Finally, we may run the application tests. We just call the single endpoint exposed by the app using TestRestTemplate and verify the output.

@Autowired
TestRestTemplate template;

@Test
public void test() {
	String res = template.getForObject("/callme/message/{message}", String.class, "Test");
	Assert.assertNotNull(res);
	Assert.assertTrue(res.endsWith("1"));
}

If you are interested in what exactly happens during the test, you can set a breakpoint inside the test method and execute the docker ps command manually.

[Figure: docker ps output showing the containers running during the test]