Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine designed especially for working with large data sets. Given that description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is part of a powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text search over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always the question of the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins).
No matter how you import your data into Elasticsearch, you have to consider another problem: the data structure. Your data is probably distributed across a few tables in your relational database. If you would like to take advantage of Elasticsearch, you should store it as a single type. That forces you to keep redundant data, which results in larger disk space usage. Of course, that effect is acceptable if the queries run faster than equivalent queries against the relational database.
OK, let's proceed to the example after that long introduction. Spring Boot provides an easy way to interact with Elasticsearch through Spring Data repositories.

1. Enabling Elasticsearch support

As is customary with Spring Boot, we don't have to provide any additional beans in the context to enable support for Elasticsearch. We just need to include the following dependency in our pom.xml:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

By default, the application tries to connect to Elasticsearch on localhost. If we use another target URL, we need to override it in the configuration settings. Here's the fragment of our application.yml file that overrides the default cluster name and address with the address of Elasticsearch started in a Docker container:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.99.100:9300

The health status of the Elasticsearch connection may be exposed by the application through the Spring Boot Actuator health endpoint. First, you need to include the following Maven dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The health check is enabled by default, and the Elasticsearch check is auto-configured. However, this verification is performed via the Elasticsearch REST API client. Therefore, we also need to set the property spring.elasticsearch.rest.uris, which is responsible for the address used by the REST client:

spring:
  elasticsearch:
    rest:
      uris: http://192.168.99.100:9200

2. Running Elasticsearch

For our tests we need a single-node Elasticsearch instance running in development mode. As usual, we will use a Docker container. Here's the command that starts the Docker container and exposes it on ports 9200 and 9300:

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2

3. Building Spring Data Repositories

To enable Elasticsearch repositories we just need to annotate the main or configuration class with @EnableElasticsearchRepositories:

@SpringBootApplication
@EnableElasticsearchRepositories
public class SampleApplication { ... }

The next step is to create a repository interface that extends CrudRepository. It provides some basic operations like save or findById. If you would like to have some additional find methods, you should define them inside the interface following the Spring Data naming convention.

public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    List<Employee> findByOrganizationName(String name);
    List<Employee> findByName(String name);

}

4. Building Document

Our relational structure of entities is flattened into a single Employee object that contains the related objects (Organization, Department). You can compare this approach to creating a view over a group of related tables in an RDBMS. In Spring Data Elasticsearch nomenclature a single object is stored as a document, so you need to annotate your object with @Document. You should also set the name of the target Elasticsearch index, the type and the id. Additional mappings can be configured with the @Field annotation.

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private Long id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;
	
    // Getters and Setters ...

}
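
The related Organization and Department classes are not listed in this article. Here's a minimal sketch (each class in its own file) consistent with how they are used later in the tests; the field names are assumptions based on the constructors and query methods that appear in this post:

public class Organization {

    private Long id;
    private String name;
    private String address;

    public Organization() {
    }

    public Organization(Long id, String name, String address) {
        this.id = id;
        this.name = name;
        this.address = address;
    }

    // Getters and Setters ...

}

public class Department {

    private Long id;
    private String name;

    public Department() {
    }

    public Department(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Getters and Setters ...

}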

5. Initial import

As I have mentioned in the preface, the main reason you may decide to use Elasticsearch is the need to work with large data sets. Therefore it is desirable to fill our test Elasticsearch node with many documents. If you would like to insert many documents in one step, you should definitely use the Bulk API. The Bulk API makes it possible to perform many index/delete operations in a single API call. This can greatly increase the indexing speed.
The bulk operations may be performed with the Spring Data ElasticsearchTemplate bean, which is also auto-configured by Spring Boot. The template provides a bulkIndex method that takes a list of index queries as an input parameter. Here's the implementation of a bean that inserts sample test data on application startup:

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";

    @Autowired
    EmployeeRepository repository;
    @Autowired
    ElasticsearchTemplate template;

    @PostConstruct
    public void init() {
        for (int i = 0; i < 10000; i++) {
            bulk(i);
        }
    }

    public void bulk(int ii) {
        try {
            if (!template.indexExists(INDEX_NAME)) {
                template.createIndex(INDEX_NAME);
            }
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setId(employee.getId().toString());
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", ii);
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }
	
	// sample data set implementation ...
	
}
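
The data generator itself is omitted in the listing above. Here's a minimal sketch of how an employees() method might look; the batch size and generated values are purely illustrative assumptions, not the actual implementation from the sample repository:

    // Illustrative generator of a single batch of random employees (requires java.util.Random).
    private List<Employee> employees() {
        List<Employee> employees = new ArrayList<>();
        Random random = new Random();
        for (int i = 0; i < 10000; i++) {
            Employee employee = new Employee();
            // Non-negative random id; colliding ids simply overwrite the existing document.
            employee.setId(random.nextLong() & Long.MAX_VALUE);
            employee.setName("Name-" + random.nextInt(1000000));
            employee.setAge(20 + random.nextInt(40));
            employee.setPosition("Developer");
            employee.setDepartment(new Department((long) random.nextInt(100), "Department-" + random.nextInt(100)));
            employee.setOrganization(new Organization((long) random.nextInt(10), "Organization-" + random.nextInt(10), "Street No. " + random.nextInt(100)));
            employees.add(employee);
        }
        return employees;
    }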

If you don't need to insert data on startup, you can disable that process by setting the property initial-import.enabled to false. Here's the declaration of the SampleDataSet bean:

@Bean
@ConditionalOnProperty("initial-import.enabled")
public SampleDataSet dataSet() {
	return new SampleDataSet();
}

6. Viewing data and running queries

Assuming that you have already started the sample application, the bean responsible for bulk indexing was not disabled, and you were patient enough to wait some hours until all the data was inserted into your Elasticsearch node, it now contains 100M documents of type employee. It is worth displaying some information about your cluster. You can do it using Elasticsearch queries, or you can download one of the available GUI tools, for example ElasticHQ. Fortunately, ElasticHQ is also available as a Docker container. You have to execute the following command to start the container with ElasticHQ:

$ docker run -d --name elastichq -p 5000:5000 elastichq/elasticsearch-hq

After startup, the ElasticHQ GUI can be accessed via a web browser on port 5000. Its web console provides basic information about the cluster and indices, and allows you to run queries. You only need to enter the Elasticsearch node address and you will be redirected to the main dashboard with statistics. Here's the main dashboard of ElasticHQ.

elastic-3

As you can see, we have a single index called sample divided into 5 shards. That is the default value provided by Spring Data's @Document, which can be overridden with the shards attribute, as sketched below. After clicking on the index we can navigate to the index management panel, where we can perform some operations like clearing the cache or refreshing the index. We can also take a look at the statistics for all shards.
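
If you need a different number of shards or replicas, they may be declared directly on the document class. A short sketch with illustrative values:

@Document(indexName = "sample", type = "employee", shards = 2, replicas = 0)
public class Employee { ... }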

elastic-4

For the current test purposes I have around 25M documents of the Employee type (around 3 GB of disk space). We can execute some test queries. I have exposed two endpoints for searching: by employee name GET /employees/{name} and by organization name GET /employees/organization/{organizationName}. The results are not overwhelming. I think we could achieve similar results with a relational database and the same amount of data.
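
The controller behind those endpoints is not shown in this article; it is just a thin wrapper around the repository methods. A minimal sketch (the class name and mappings are my assumptions) could look like this:

@RestController
@RequestMapping("/employees")
public class EmployeeController {

    @Autowired
    EmployeeRepository repository;

    @GetMapping("/{name}")
    public List<Employee> findByName(@PathVariable("name") String name) {
        return repository.findByName(name);
    }

    @GetMapping("/organization/{organizationName}")
    public List<Employee> findByOrganizationName(@PathVariable("organizationName") String organizationName) {
        return repository.findByOrganizationName(organizationName);
    }

}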

elastic-2

7. Testing

OK, we have already finished development and performed some manual tests on the large data set. Now, it's time to create some integration tests that run at build time. We can use Testcontainers, a library that allows us to automatically start Docker containers with databases during JUnit tests. For more about this library you may refer to its site https://www.testcontainers.org or to one of my previous articles: Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework. Fortunately, Testcontainers supports Elasticsearch. To enable it in the test scope you first need to include the following dependency in your pom.xml:

<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>elasticsearch</artifactId>
	<version>1.11.1</version>
	<scope>test</scope>
</dependency>

The next step is to define a @ClassRule or @Rule field that points to the Elasticsearch container. It is started automatically before the test class or before each test, depending on the annotation you use. The exposed port number is generated automatically, so you need to retrieve it and set it as the value of the spring.data.elasticsearch.cluster-nodes property. Here's the full implementation of our JUnit integration test:

@RunWith(SpringRunner.class)
@SpringBootTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class EmployeeRepositoryTest {

    @ClassRule
    public static ElasticsearchContainer container = new ElasticsearchContainer();
    @Autowired
    EmployeeRepository repository;

    @BeforeClass
    public static void before() {
        System.setProperty("spring.data.elasticsearch.cluster-nodes", container.getContainerIpAddress() + ":" + container.getMappedPort(9300));
    }

    @Test
    public void testAdd() {
        Employee employee = new Employee();
        employee.setId(1L);
        employee.setName("John Smith");
        employee.setAge(33);
        employee.setPosition("Developer");
        employee.setDepartment(new Department(1L, "TestD"));
        employee.setOrganization(new Organization(1L, "TestO", "Test Street No. 1"));
        employee = repository.save(employee);
        Assert.assertNotNull(employee);
    }

    @Test
    public void testFindAll() {
        Iterable<Employee> employees = repository.findAll();
        Assert.assertTrue(employees.iterator().hasNext());
    }

    @Test
    public void testFindByOrganization() {
        List<Employee> employees = repository.findByOrganizationName("TestO");
        Assert.assertTrue(employees.size() > 0);
    }

    @Test
    public void testFindByName() {
        List<Employee> employees = repository.findByName("John Smith");
        Assert.assertTrue(employees.size() > 0);
    }

}

Summary

In this article you have learned how to:

  • Run your local instance of Elasticsearch with Docker
  • Integrate Spring Boot application with Elasticsearch
  • Use Spring Data Repositories for saving data and performing simple queries
  • Use Spring Data ElasticsearchTemplate to perform bulk operations on an index
  • Use ElasticHQ for monitoring your cluster
  • Build automatic integration tests for Elasticsearch with Testcontainers

The sample application source code is as usual available on GitHub in repository sample-spring-elasticsearch.


Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests for a system consisting of several microservices may be quite a challenge. Today I'm going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers. If you are interested in an intro to these frameworks you may take a look at my previous articles about them.

Today we will consider a system consisting of three microservices, where each microservice is developed by a different team. One of these microservices, trip-management, integrates with the two others: driver-management and passenger-management. The question is how to organize integration tests under these assumptions. In that case we can use one of the interesting features provided by Hoverfly – the ability to run it as a remote proxy. What does it mean in practice? It is illustrated in the picture below. The same external instance of the Hoverfly proxy is shared between all microservices during JUnit tests. The driver-management and passenger-management microservices test their own methods exposed for use by trip-management, but all the requests are sent through the remote Hoverfly instance, which acts as a proxy. Hoverfly captures all the requests and responses sent during the tests. On the other hand, trip-management also tests its methods, but the communication with the other microservices is simulated by the remote Hoverfly instance based on the previously captured HTTP traffic.

hoverfly-test-1

We will use Docker for running the remote instance of the Hoverfly proxy. We will also use Docker images of the microservices during the tests. That's why we need the Testcontainers framework, which is responsible for running the application container before the integration tests start. So, the first step is to build Docker images of the driver-management and passenger-management microservices.

1. Building Docker Image

Assuming you have successfully installed Docker on your machine and have set the environment variables DOCKER_HOST and DOCKER_CERT_PATH, you may use io.fabric8:docker-maven-plugin for it. It is important to execute the build goal of that plugin just after the package Maven phase, but before the integration-test phase. Here's the appropriate configuration inside the Maven pom.xml.

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<configuration>
		<images>
			<image>
				<name>piomin/driver-management</name>
				<alias>dockerfile</alias>
				<build>
					<dockerFileDir>${project.basedir}</dockerFileDir>
				</build>
			</image>
		</images>
	</configuration>
	<executions>
		<execution>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>build</goal>
			</goals>
		</execution>
	</executions>
</plugin>

2. Application Integration Tests

Our integration tests should be run during the integration-test phase, so they must not be executed during the test phase, which runs before the application fat JAR and Docker image are built. Here's the appropriate configuration of maven-surefire-plugin.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<excludes>
			<exclude>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</exclude>
		</excludes>
	</configuration>
	<executions>
		<execution>
			<id>integration-test</id>
			<goals>
				<goal>test</goal>
			</goals>
			<phase>integration-test</phase>
			<configuration>
				<excludes>
					<exclude>none</exclude>
				</excludes>
				<includes>
					<include>pl.piomin.services.driver.contract.DriverControllerIntegrationTests</include>
				</includes>
			</configuration>
		</execution>
	</executions>
</plugin>

3. Running Hoverfly

Before running any tests we need to start an instance of Hoverfly in proxy mode. To achieve that we use the Hoverfly Docker image. Because Hoverfly has to forward requests to the downstream microservices by host name, we create a Docker network and then run Hoverfly in this network.

$ docker network create tests
$ docker run -d --name hoverfly -p 8500:8500 -p 8888:8888 --network tests spectolabs/hoverfly

The Hoverfly proxy is now available for me (I'm using Docker Toolbox) under the address 192.168.99.100:8500. We can also take a look at the admin web console available under the address http://192.168.99.100:8888. Under that address you can also access the HTTP API, which is described later in this article.

4. Including test dependencies

To enable Hoverfly and Testcontainers for our tests we first need to include some dependencies in the Maven pom.xml. Our sample applications are built on top of Spring Boot, so we also include the Spring Test project.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.6</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.11.1</version>
	<scope>test</scope>
</dependency>

5. Building integration tests on the provider side

Now, we can finally proceed to the JUnit test implementation. Here's the full source code of the test for the driver-management microservice, but some things need to be explained. Before running our test methods we first start the Docker container with the application using Testcontainers. We use a GenericContainer annotated with @ClassRule for that. Testcontainers provides an API for interacting with containers, so we can easily set the target Docker network and the container hostname. We also wait until the application container is ready for use by calling the waitingFor method on GenericContainer.
The next step is to enable the Hoverfly rule for our test. We will run it in capture mode. By default Hoverfly tries to start a local proxy instance, which is why we provide the remote address of the existing instance already started as a Docker container.
The tests are pretty simple. We call the endpoints using Spring's TestRestTemplate. Because the requests must finally be proxied to the application container, we use its hostname as the target address. The whole traffic is captured by Hoverfly.

public class DriverControllerIntegrationTests {

    private TestRestTemplate template = new TestRestTemplate();

    @ClassRule
    public static GenericContainer appContainer = new GenericContainer<>("piomin/driver-management")
            .withCreateContainerCmdModifier(cmd -> cmd.withName("driver-management").withHostName("driver-management"))
            .withNetworkMode("tests")
            .withNetworkAliases("driver-management")
            .withExposedPorts(8080)
            .waitingFor(Wait.forHttp("/drivers"));

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inCaptureMode("driver.json", HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Test
    public void testFindNearestDriver() {
        Driver driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 40, 20);
        Assert.assertNotNull(driver);
        driver = template.getForObject("http://driver-management:8080/drivers/{locationX}/{locationY}", Driver.class, 10, 20);
        Assert.assertNotNull(driver);
    }

    @Test
    public void testUpdateDriver() {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        DriverInput input = new DriverInput();
        input.setId(2L);
        input.setStatus(DriverStatus.UNAVAILABLE);
        HttpEntity<DriverInput> entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
        input.setId(1L);
        input.setStatus(DriverStatus.AVAILABLE);
        entity = new HttpEntity<>(input, headers);
        template.put("http://driver-management:8080/drivers", entity);
    }

}

Now, you can execute the tests during the application build using the mvn clean verify command. The sample application source code is available on GitHub in the repository sample-testing-microservices under the branch remote.

6. Building integration tests on the consumer side

In the previous section we discussed the integration tests implemented on the provider side. There are two microservices, driver-management and passenger-management, that expose endpoints invoked by the third microservice, trip-management. The traffic generated during those tests has already been captured by Hoverfly. This is very important in this sample, because each time you build the newest version of a microservice, Hoverfly refreshes the structure of the previously recorded requests. Now, if we run the tests for the consumer application (trip-management), they are fully based on the newest version of the requests generated during the tests of the microservices on the provider side. You can check out the list of all requests captured by Hoverfly by calling the endpoint http://192.168.99.100:8888/api/v2/simulation.
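
If you prefer to inspect the captured simulation programmatically instead of in a browser, a trivial sketch using Spring's RestTemplate could look like the snippet below (the class name is arbitrary and the address assumes the Docker Toolbox setup used in this article):

import org.springframework.web.client.RestTemplate;

public class SimulationDump {

    public static void main(String[] args) {
        // Dump the simulation (captured requests/responses) kept by the remote Hoverfly instance.
        RestTemplate rest = new RestTemplate();
        String simulation = rest.getForObject("http://192.168.99.100:8888/api/v2/simulation", String.class);
        System.out.println(simulation);
    }

}
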
Here are the integration tests implemented inside trip-management. They also use the remote Hoverfly proxy instance. The only difference is the running mode, which is simulation. It simulates the requests sent to driver-management and passenger-management based on the traffic previously captured by Hoverfly.

@SpringBootTest
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripIntegrationTests {

    ObjectMapper mapper = new ObjectMapper();

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule
            .inSimulationMode(HoverflyConfig.remoteConfigs().host("192.168.99.100"))
            .printSimulationData();

    @Autowired
    MockMvc mockMvc;

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

Now, you can run the mvn clean verify command on the root module. It runs the tests in the following order: driver-management, passenger-management and trip-management.

hoverfly-test-3

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I was using Docker containers for running some third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if our application integrates with databases, message brokers or some other popular tools. If you are planning to build such integration tests you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx or HashiCorp's Vault. Today I'm going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database and Vault. For the purposes of this example we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case.
I described there how to use a very interesting Vault feature called secret engines for generating database user credentials dynamically. I used the Spring Cloud Vault module in my Spring Boot application to automatically integrate with that feature of Vault. The implemented mechanism is pretty simple. The application calls the Vault secret engine before it tries to connect to the Postgres database on startup. Vault is integrated with Postgres via the secret engine, and therefore it creates a user with sufficient privileges on Postgres. Then, the generated credentials are automatically injected into the auto-configured Spring Boot properties used for connecting with the database: spring.datasource.username and spring.datasource.password. The following picture illustrates the described solution.

testcontainers-1

OK, we know how it works, now the question is how to test it automatically. With Testcontainers it is possible with just a few lines of code.

1. Building application

Let's begin with a short intro to the application code. It is very simple. Here's the list of dependencies required for building an application that exposes a REST API and integrates with Postgres and Vault.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The application connects to Postgres, enables integration with Vault via Spring Cloud Vault, and automatically creates/updates tables on startup.

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres
  jpa.hibernate.ddl-auto: update

It exposes a single endpoint. The following method is responsible for handling incoming requests. It just inserts a record into the database and returns a response with the app name, version and id of the inserted record.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
	
	@Autowired
	Optional<BuildProperties> buildProperties;
	@Autowired
	CallmeRepository repository;
	
	@GetMapping("/message/{message}")
	public String ping(@PathVariable("message") String message) {
		Callme c = repository.save(new Callme(message, new Date()));
		if (buildProperties.isPresent()) {
			BuildProperties infoProperties = buildProperties.get();
			LOGGER.info("Ping: name={}, version={}", infoProperties.getName(), infoProperties.getVersion());
			return infoProperties.getName() + ":" + infoProperties.getVersion() + ":" + c.getId();
		} else {
			return "callme-service:"  + c.getId();
		}
	}
	
}
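
The Callme entity is not listed here. A minimal JPA sketch consistent with the constructor and getId() calls used in the controller (field names and the id type are assumptions) might look as follows:

@Entity
public class Callme {

	@Id
	@GeneratedValue
	private Integer id;
	private String message;
	private Date createDate;

	public Callme() {
	}

	public Callme(String message, Date createDate) {
		this.message = message;
		this.createDate = createDate;
	}

	public Integer getId() {
		return id;
	}

	// remaining getters and setters ...

}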

2. Enabling Testcontainers

To enable Testcontainers for our project we need to include some dependencies in our Maven pom.xml. We have dedicated modules for Postgres and Vault. We also include the Spring Boot Test dependency, because we would like to test the whole Spring Boot app.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>vault</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>postgresql</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>

3. Running Vault test container

The Testcontainers framework supports JUnit 4, JUnit 5 and Spock. The Vault container can be started before the tests if it is annotated with @Rule or @ClassRule. By default it uses version 0.7, but we can override it with the newest version, which is 1.0.2. We also set a root token, which is then required by Spring Cloud Vault for integration with Vault.

@ClassRule
public static VaultContainer vaultContainer = new VaultContainer<>("vault:1.0.2")
	.withVaultToken("123456")
	.withVaultPort(8200);

That root token is then set as the spring.cloud.vault.token property on the test class before the JUnit test starts.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, properties = {
    "spring.cloud.vault.token=123456"
})
public class CallmeTest { ... }

4. Running Postgres test container

As an alternative to @ClassRule, we can manually start the container in a @BeforeClass or @Before method in the test. With this approach you will also have to stop it manually in an @AfterClass or @After method. We start the Postgres container manually, because by default it is exposed on a dynamically generated port, which needs to be set for the Spring Boot application before the test starts. The listen port is returned by the getFirstMappedPort method invoked on PostgreSQLContainer.

private static PostgreSQLContainer postgresContainer = new PostgreSQLContainer()
	.withDatabaseName("postgres")
	.withUsername("postgres")
	.withPassword("postgres123");
	
@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	// ...
}

@AfterClass
public static void shutdown() {
	postgresContainer.stop();
}

5. Integrating Vault and Postgres containers

Once we have successfully started both the Vault and Postgres containers, we need to integrate them via a Vault secret engine. First, we need to enable the database secret engine in Vault. After that we must configure the connection to Postgres. The last step is to configure a role. A role is a logical name that maps to a policy used to generate those credentials. All these actions may be performed using Vault commands. You can launch commands on the Vault container using the execInContainer method. The Vault configuration commands should be executed just after the Postgres container startup.

@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	vaultContainer.execInContainer("vault", "secrets", "enable", "database");
	String url = String.format("connection_url=postgresql://{{username}}:{{password}}@192.168.99.100:%d?sslmode=disable", port);
	vaultContainer.execInContainer("vault", "write", "database/config/postgres", "plugin_name=postgresql-database-plugin", "allowed_roles=default", url, "username=postgres", "password=postgres123");
	vaultContainer.execInContainer("vault", "write", "database/roles/default", "db_name=postgres",
		"creation_statements=CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
		"default_ttl=1h", "max_ttl=24h");
}

6. Running application tests

Finally, we may run the application tests. We just call the single endpoint exposed by the app using TestRestTemplate and verify the output.

@Autowired
TestRestTemplate template;

@Test
public void test() {
	String res = template.getForObject("/callme/message/{message}", String.class, "Test");
	Assert.assertNotNull(res);
	Assert.assertTrue(res.endsWith("1"));
}

If you are interested in what exactly happens during the test, you can set a breakpoint inside the test method and execute the docker ps command manually.

testcontainers-2

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature used for building reproducible Docker images from application source code. With S2I you don't have to provide any Kubernetes YAML templates or build the Docker image by yourself; OpenShift will do it for you. Let's see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample application's source code.

1. Prepare application code

I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code now as was used in that article, so you will be able to compare the two different approaches. Our source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a Mongo database backend. Here's the diagram that illustrates our architecture.

s2i-1

Every microservice is a Spring Boot application which uses Maven as a build tool. After including spring-boot-maven-plugin it is able to generate a single fat JAR with all dependencies, which is required by the source-2-image builder.

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Every application includes starters for Spring Web, Spring Actuator and Spring Data MongoDB for integration with the Mongo database. We also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for the applications that call REST endpoints exposed by other microservices.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger-ui</artifactId>
		<version>2.9.2</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-mongodb</artifactId>
	</dependency>
</dependencies>

Every Spring Boot application exposes a REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}
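
The EmployeeRepository injected above is a standard Spring Data MongoDB interface. A sketch consistent with the methods called by the controller (the String id type is an assumption based on the findById call) could look like this:

public interface EmployeeRepository extends CrudRepository<Employee, String> {

	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);

}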

The application expects environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through the OpenFeign declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

	@GetMapping("/employee/organization/{organizationId}")
	List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
	
}

The address of the target service accessed by Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.

spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally you just have to download it from the project site, copy minishift.exe (for Windows) to your PATH directory and start it using the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications, Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable source-2-image for Java apps. First, we add some privileges to the user admin to be able to access the project openshift. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let's begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as cluster admin. Now, we can grant the role cluster-admin to the user admin.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin. You should be able to see the project openshift. That is not all. The image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command or just enable and apply the xpaas addon. I prefer the second option.

$ minishift addons apply xpaas

Now, you can go to the Minishift web console (for me available under the address https://192.168.99.100:8443), select the project openshift and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

s2i-2

The newest version of that image is 1.3. Surprisingly, it is not the newest version available on OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What's important for us, OpenShift generates a secret, by default available under the name mongodb.

s2i-3

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based Java standalone projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the application type is auto-detected by OpenShift, and Java source code will be deployed as a JEE application on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let's create our first application on OpenShift. We begin with the microservice employee. Under normal circumstances each microservice would be located in a separate Git repository. In our sample all of them are placed in a single repository, so we have to provide the location of the current app by setting the parameter --context-dir. We will also override the default branch to openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service

All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. It can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the command oc new-app. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application a new build is run. First, it downloads the source code from the Git repository, then it builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
Now, we can create the next application – department. For simplification, all three microservices connect to the same database, which is not recommended under normal circumstances. In that case the only difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

The same as before, we also inject the mongodb secret into the BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we are deploying the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep look into created OpenShift objects

The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available – one for each application. The same list can be displayed using the oc get bc command.

s2i-4

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

s2i-5

We can always display the YAML configuration file with the BuildConfig definition, but it is also possible to perform similar actions using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

s2i-6

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available under the address 172.30.1.1:5000. The list of available image streams can be found under the section Builds -> Images.

s2i-7

Every application is automatically exposed on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia) via services. You can also expose these services outside Minishift by creating an OpenShift Route using the oc expose command.

s2i-8

5. Testing the sample system

To proceed with the tests we should first expose our microservices outside Minishift. To do that just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that we can access applications on the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io as shown below.

s2i-9

Each microservice provides Swagger2 API documentation available on page swagger-ui.html. Thanks to that we can easily test every single endpoint exposed by the service.

s2i-10

It's worth noticing that every application makes use of three approaches to injecting environment variables into the pod:

  1. It stores the version number in the source code repository inside the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property name is VERSION, which is injected using Spring's @Value and set for the Swagger API (the code is visible below).
  2. I have already set the names of the dependent services as environment variables while executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}")
String version;

public static void main(String[] args) {
	SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}

Conclusion

Today I have shown you that deploying your applications on OpenShift may be a very simple thing. You don't have to create any YAML descriptor files or build Docker images by yourself to run your app. It is built directly from your source code. You can compare it with the deployment on Kubernetes described in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.

RabbitMQ Cluster with Consul and Vault

Almost two years ago I wrote an article about RabbitMQ clustering, RabbitMQ in cluster. It was one of the first posts on my blog, and it's really hard to believe it has been two years since I started this blog. Anyway, one of the questions about the topic described in that article inspired me to return to the subject one more time. That question pointed to a problem with the approach to setting up the cluster. The approach assumes that we manually attach new nodes to the cluster by executing the command rabbitmqctl join_cluster with the cluster name as a parameter. If I remember correctly, it was the only available method of creating a cluster at that time. Today we have more choices, which illustrates the evolution of RabbitMQ during the last two years. A RabbitMQ cluster can be formed in a number of ways:

  • Manually with rabbitmqctl (as described in my article RabbitMQ in cluster)
  • Declaratively by listing cluster nodes in config file
  • Using DNS-based discovery
  • Using AWS (EC2) instance discovery via a dedicated plugin
  • Using Kubernetes discovery via a dedicated plugin
  • Using Consul discovery via a dedicated plugin
  • Using etcd-based discovery via a dedicated plugin

Today, I'm going to show you how to create a RabbitMQ cluster using service discovery based on HashiCorp's Consul. Additionally, we will add Vault to our architecture in order to use its interesting feature called secrets engines for managing the credentials used for accessing RabbitMQ. We will set up this sample on the local machine using Docker images of RabbitMQ, Consul and Vault. Finally, we will test our solution using a simple Spring Boot application that sends messages to the cluster and listens for incoming ones. That application is available in the GitHub repository sample-haclustered-rabbitmq-service in the branch consul.

Architecture

We use Vault as a credentials manager when applications try to authenticate against a RabbitMQ node or a user tries to log in to the RabbitMQ web admin console. Each RabbitMQ node registers itself in Consul after startup and retrieves the list of nodes running inside the cluster. Vault is integrated with RabbitMQ using a dedicated secrets engine. Here's the architecture of our sample solution.

rabbit-consul-logo

1. Configure RabbitMQ Consul plugin

The integration between RabbitMQ and Consul is realized via the plugin rabbitmq-peer-discovery-consul. This plugin is not enabled by default in the official RabbitMQ Docker container. So, the first step is to build our own Docker image, based on the official RabbitMQ image, that installs and enables the required plugin. By default, the main RabbitMQ configuration file is available under the path /etc/rabbitmq/rabbitmq.conf inside the Docker container. To override it we just use the COPY statement as shown below. The following Dockerfile definition takes RabbitMQ with the management web console as the base image and enables the rabbitmq_peer_discovery_consul plugin.

FROM rabbitmq:3.7.8-management
COPY rabbitmq.conf /etc/rabbitmq
RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul

Now, let's take a closer look at our plugin configuration settings. Because I run Docker on Windows, Consul is not available under the default localhost address, but on 192.168.99.100. So, first we need to set that IP address using the property cluster_formation.consul.host. We also need to set Consul as the default peer discovery implementation by setting the name of the plugin in the property cluster_formation.peer_discovery_backend. Finally, we have to set two additional properties to make it work in our local Docker environment. They are related to the address of the RabbitMQ node sent to Consul during the registration process. It is important to compute it properly, and not to send, for example, localhost. After setting the property cluster_formation.consul.svc_addr_use_nodename to false, a node will register itself using its host name instead of the node name. We can set the hostname for a container in its run command. Here's my full RabbitMQ configuration file used in the demo for this article.

loopback_users.guest = false
listeners.tcp.default = 5672
hipe_compile = false
management.listener.port = 15672
management.listener.ssl = false
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_consul
cluster_formation.consul.host = 192.168.99.100
cluster_formation.consul.svc_addr_auto = true
cluster_formation.consul.svc_addr_use_nodename = false

After saving the configuration visible above in the file rabbitmq.conf, we can proceed to building our custom Docker image with RabbitMQ. This image is available in my Docker repository under the alias piomin/rabbitmq, but you can also build it yourself from the Dockerfile by executing the following command.

$ docker build -t piomin/rabbitmq:1.0 .
Sending build context to Docker daemon  3.072kB
Step 1 : FROM rabbitmq:3.7.8-management
 ---> d69a5113ceae
Step 2 : COPY rabbitmq.conf /etc/rabbitmq
 ---> aa306ef88085
Removing intermediate container fda0e21178f9
Step 3 : RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul
 ---> Running in 0892a42bffef
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@fda0e21178f9...
The following plugins have been enabled:
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul

set 5 plugins.
Offline change; changes will take effect at broker restart.
 ---> cfe73f9d9904
Removing intermediate container 0892a42bffef
Successfully built cfe73f9d9904

2. Running RabbitMQ cluster on Docker

In the previous step we successfully created a Docker image of RabbitMQ configured to run in cluster mode using Consul discovery. Before running this image we need to start an instance of Consul. Here's the command that starts a Docker container with Consul and exposes it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

We will also create a Docker network to enable communication between containers by hostname. It is required in this scenario, because each RabbitMQ container registers itself using the container hostname.

$ docker network create rabbitmq

Now, we can run our three clustered RabbitMQ containers. We will set a unique hostname for every single container (using the -h option) and set the same Docker network everywhere. We also have to set the container environment variable RABBITMQ_ERLANG_COOKIE.

$ docker run -d --name rabbit1 -h rabbit1 --network rabbitmq -p 30000:5672 -p 30010:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit2 -h rabbit2 --network rabbitmq -p 30001:5672 -p 30011:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit3 -h rabbit3 --network rabbitmq -p 30002:5672 -p 30012:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0

After running all three instances of RabbitMQ we can first take a look at the Consul web console. You should see there a new service called rabbitmq. This value is the default cluster name set by the RabbitMQ Consul plugin. We can override it inside rabbitmq.conf using the cluster_formation.consul.svc property.

rabbit-consul-1

We can check whether the cluster has been successfully started using the RabbitMQ web management console. Every node exposes it. I just had to override the default port 15672 to avoid port conflicts between the three running instances.

rabbit-consul-10

3. Integrating RabbitMQ with Vault

In the two previous steps we have successfully run a cluster of three RabbitMQ nodes based on Consul discovery. Now, we will add Vault to our sample system to dynamically generate user credentials. Let's begin by running Vault on Docker. You can find detailed information about it in my previous article Secure Spring Cloud Microservices with Vault and Nomad. We will run Vault in development mode using the following command.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

You can copy the root token from the container logs using the docker logs -f vault command. Then you have to log in to the Vault web console available at http://192.168.99.100:8200 using this token and enable the RabbitMQ secrets engine as shown below.

rabbit-consul-2

And confirm.

rabbit-consul-3

You can easily run Vault commands using the terminal provided by the web admin console or do the same thing using the HTTP API. The first command visible below is used for writing connection details. We just need to pass the RabbitMQ address and admin user credentials. The provided configuration settings point to the #1 RabbitMQ node, but the changes are then replicated to the whole cluster.

$ vault write rabbitmq/config/connection connection_uri="http://192.168.99.100:30010" username="guest" password="guest"

The next step is to configure a role that maps a name in Vault to virtual host permissions.

$ vault write rabbitmq/roles/default vhosts='{"/":{"write": ".*", "read": ".*"}}'

We can test our newly created configuration by running the command vault read rabbitmq/creds/default as shown below.

rabbit-consul-4
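
As mentioned above, the same read can be performed through Vault’s HTTP API instead of the CLI. Here’s a minimal sketch using Java’s built-in HTTP client (Java 11+) – the Vault address and token are placeholders for your own instance, and the class is not part of the sample repository.

// Hypothetical equivalent of "vault read rabbitmq/creds/default" issued over Vault's HTTP API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadRabbitCredentials {

	public static void main(String[] args) throws Exception {
		HttpRequest request = HttpRequest.newBuilder()
				.uri(URI.create("http://192.168.99.100:8200/v1/rabbitmq/creds/default"))
				.header("X-Vault-Token", "<root-token>")
				.GET()
				.build();
		// The JSON response contains the generated username and password fields
		HttpResponse<String> response = HttpClient.newHttpClient()
				.send(request, HttpResponse.BodyHandlers.ofString());
		System.out.println(response.body());
	}

}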

4. Sample application

Our sample application is pretty simple. It consists of two modules. The first of them, sender, is responsible for sending messages to RabbitMQ, while the second one, listener, receives incoming messages. Both of them are Spring Boot applications that integrate with RabbitMQ and Vault using the following dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-rabbitmq</artifactId>
	<version>2.0.2.RELEASE</version>
</dependency>

We need to provide some configuration settings in the bootstrap.yml file to integrate our application with Vault. First, we need to enable that integration by setting the property spring.cloud.vault.rabbitmq.enabled to true. Of course, the Vault address and root token are required. It is also important to set the property spring.cloud.vault.rabbitmq.role with the name of the Vault role configured in step 3. Spring Cloud Vault injects the username and password generated by Vault into the application properties spring.rabbitmq.username and spring.rabbitmq.password, so the only other thing we need to configure in the bootstrap.yml file is the list of available cluster nodes.

spring:
  rabbitmq:
    addresses: 192.168.99.100:30000,192.168.99.100:30001,192.168.99.100:30002
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: s.7DaENeiqLmsU5ZhEybBCRJhp
      rabbitmq:
        enabled: true
        role: default
        backend: rabbitmq
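
If you want to confirm what Spring Cloud Vault has injected, you can log the resolved property after startup. The following class is just an illustration and is not part of the sample repository.

// Hypothetical component that logs the RabbitMQ username generated by Vault
// (resolved into spring.rabbitmq.username by Spring Cloud Vault).
@Component
public class VaultCredentialsLogger {

	private static final Logger LOGGER = LoggerFactory.getLogger(VaultCredentialsLogger.class);

	@Value("${spring.rabbitmq.username}")
	private String username;

	@EventListener(ApplicationReadyEvent.class)
	public void logGeneratedUser() {
		// The generated password is deliberately not logged
		LOGGER.info("RabbitMQ user generated by Vault: {}", username);
	}

}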

For test purposes you should enable highly available (mirrored) queues on RabbitMQ. For instructions on how to configure them using policies you can refer to my article RabbitMQ in cluster. The application works at the level of exchanges. The auto-configured connection factory is injected into the application and set on the RabbitTemplate bean.

@SpringBootApplication
public class Sender {
	
	private static final Logger LOGGER = LoggerFactory.getLogger("Sender");
	
	@Autowired
	RabbitTemplate template;

	public static void main(String[] args) {
		SpringApplication.run(Sender.class, args);
	}

	@PostConstruct
	public void send() {
		for (int i = 0; i < 1000; i++) {
			int id = new Random().nextInt(100000);
			template.convertAndSend(new Order(id, "TEST"+id, OrderType.values()[(id%2)]));
		}
		LOGGER.info("Sending completed.");
	}
    
    @Bean
    public RabbitTemplate template(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setExchange("ex.example");
        return rabbitTemplate;
    }
    
}
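
The Order class used as the message payload is not shown above. Since the default SimpleMessageConverter relies on Java serialization, it has to implement Serializable (unless a JSON converter is configured in the sample repository). Here’s a hedged sketch – the field and constant names are my assumptions and may differ from the actual code.

// Hypothetical payload classes; field and constant names are assumptions, not taken from the repository.
enum OrderType { NEW, PROCESSED }

public class Order implements Serializable {

	private int id;
	private String name;
	private OrderType type;

	public Order(int id, String name, OrderType type) {
		this.id = id;
		this.name = name;
		this.type = type;
	}

	// getters, setters and toString() omitted

}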

Our listener app is connected only to the third node of the cluster (spring.rabbitmq.addresses=192.168.99.100:30002). However, the test queue is mirrored across all cluster nodes, so it is able to receive messages sent by the sender app. You can easily test it using my sample applications.

@SpringBootApplication
@EnableRabbit
public class Listener {

	private static final  Logger LOGGER = LoggerFactory.getLogger("Listener");

	private Long timestamp;

	public static void main(String[] args) {
		SpringApplication.run(Listener.class, args);
	}

	@RabbitListener(queues = "q.example")
	public void onMessage(Order order) {
		if (timestamp == null)
			timestamp = System.currentTimeMillis();
		LOGGER.info((System.currentTimeMillis() - timestamp) + " : " + order.toString());
	}

	@Bean
	public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
		SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory);
		factory.setConcurrentConsumers(10);
		factory.setMaxConcurrentConsumers(20);
		return factory;
	}
	
}
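
Note that the sender publishes to the ex.example exchange, while the listener consumes from the q.example queue. The binding between them is not shown in the snippets above; a minimal sketch of how it could be declared with Spring AMQP (assuming a fanout exchange, which may differ from the sample repository) looks like this.

// Hypothetical declarations binding the q.example queue to the ex.example exchange.
@Configuration
public class AmqpConfig {

	@Bean
	public FanoutExchange exchange() {
		return new FanoutExchange("ex.example");
	}

	@Bean
	public Queue queue() {
		return new Queue("q.example");
	}

	@Bean
	public Binding binding(Queue queue, FanoutExchange exchange) {
		return BindingBuilder.bind(queue).to(exchange);
	}

}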

Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not complicated. The Kubernetes extension, being a part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all Kubernetes resources required to set up your environment, and once everything is ready it just runs the defined integration tests.
The really good news about Arquillian Cube is that it supports the Istio framework. You can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. It allows you to test some unexpected situations during network communication between microservices, like server errors or timeouts.
If you would like to run some tests using Istio resources on Minishift you should first install it on your platform. To do that you need to change some privileges for your OpenShift user. Let’s do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let’s do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let’s create a new project dedicated especially to Istio and then add some required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed to the installation of the Istio components. I downloaded the newest version of Istio available at the time – 1.0.1. The installation file is available under the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests with OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article we need to include a dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now, we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to use some additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let’s take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard test created for running on an OpenShift instance I have added the Istio resource file customer-to-account-route.yaml. Then I have invoked the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion on an empty list of accounts: Assert.assertTrue(c.getAccounts().isEmpty()). Why? We will simulate a timeout in communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public  void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", url);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into route rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule’s corresponding request destination. The faults can be either delays or aborts. We can define a percentage level of errors using the percent field for both types of fault. In the following Istio resource I have defined a 2 second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the test we should also modify the OpenShift deployment templates of our sample applications. We need to inject some Istio resources into the pod definitions using the istioctl kube-inject command as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here’s a fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any applications that connect to it. To achieve that we should set the Mongo deployment resource in the Arquillian configuration file using the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We will use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the user admin. You can collect it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

The communication between customer-service and account-service is realized by the Vert.x WebClient. We will set the read timeout for the client to 1 second. Because Istio injects a 2 second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList()));
			}
		});
		return this;
	}
}

The full code of sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or just from your IDE. The test test1CustomerRoute is executed first. It adds a new customer and saves the generated id for the next two tests.

arquillian-istio-3

The next test is test2AccountRoute. It adds an account for the customer created during the previous test.

arquillian-istio-2

Finally, the test responsible for verifying communication between the microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in communication with account-service.

arquillian-istio-1

Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker

Here’s the next article in my “Quick Guide to…” series. This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article is quite similar to that of Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I’m going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development
  • Providing service discovery for all microservices using Spring Cloud Kubernetes project
  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files
  • Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components provided by Spring Cloud such as Eureka, Spring Cloud Config or Zuul may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

One really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still in the incubation stage it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I’ll show you how to use its implementation of the discovery client, inter-service communication with the Ribbon client and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let’s take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

micro-kube-1

Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released as part of the Spring Cloud release train. So, we need to define its version explicitly. Because we use Spring Boot 2.0 we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.

Pre-requirements

In order to be able to deploy and test our sample microservices we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in my article Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word ‘minishift’ with ‘minikube’. In fact, it does not matter if you choose Kubernetes or OpenShift – the next part of this tutorial is applicable to both of them
  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift you should first enable the admin-user addon, then log in as a cluster admin, and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
  • All our sample microservices use MongoDB as a backend store. So, you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use the predefined template just by selecting the Mongo service in the Catalog list. With Kubernetes the task is a little more difficult. You have to prepare the deployment configuration files by yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster you should execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject configuration with Config Maps and Secrets

When using Spring Cloud the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a ConfigMap. It holds key-value pairs of configuration data that can be consumed in pods. It is used for storing and sharing non-sensitive, unencrypted configuration information. For sensitive information in your clusters, you must use Secrets. The usage of both of these Kubernetes objects can be perfectly demonstrated with the example of MongoDB connection settings. Inside the Spring Boot application we can easily inject them using environment variables. Here’s a fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While the username and password are sensitive fields, the database name is not. So we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
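
Values stored in the data section of a Secret have to be base64-encoded. You can quickly verify what the manifest above contains with a few lines of plain Java (or simply with the base64 command-line tool).

import java.util.Base64;

public class DecodeSecret {

	public static void main(String[] args) {
		// Decodes the values from the Secret above: "piotr" and "123456"
		System.out.println(new String(Base64.getDecoder().decode("cGlvdHI=")));
		System.out.println(new String(Base64.getDecoder().decode("MTIzNDU2")));
	}

}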

To apply the configuration to the Kubernetes cluster we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that we should inject the configuration properties into the application’s pods. When defining the container configuration inside the Deployment YAML file we have to include references to the config map and the secret as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building service discovery with Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster, and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used for accessing an application from outside the Kubernetes cluster or for inter-service communication inside the cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First we need to include the following dependency in the project pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application – the same as we have always done for discovery based on Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced, and the Zipkin servers available for sending traces or spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
	// ...
}
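
With the discovery client enabled you can also query Kubernetes services by name programmatically. The controller below is only an illustration and is not part of the sample repository.

// Hypothetical endpoint listing all pods (instances) backing the "employee" Kubernetes service.
@RestController
public class InstancesController {

	@Autowired
	private DiscoveryClient discoveryClient;

	@GetMapping("/instances")
	public List<ServiceInstance> employeeInstances() {
		return discoveryClient.getInstances("employee");
	}

}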

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For the application employee-service it is employee.

spring:
  application:
    name: employee

3. Building microservice using Docker and deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB we should create an interface that extends the standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
	
	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);
	
}

The entity class should be annotated with Mongo’s @Document and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {

	@Id
	private String id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	
	// ...
	
}

The repository bean has been injected into the controller class. Here’s the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

In order to run our microservices on Kubernetes we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here’s the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let’s build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with the applications on Kubernetes. To do that just execute the kubectl apply command on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml

4. Communication between microservices with Spring Cloud Kubernetes Ribbon

All the microservices are deployed on Kubernetes. Now, it’s worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let’s take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively you can also use a Spring @LoadBalanced RestTemplate – a sketch of that option follows the dependency list below.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
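
As mentioned above, an alternative to OpenFeign is a load-balanced RestTemplate that resolves the employee service name through the discovery client. Here’s a hedged sketch of that option – it is not used in the sample repository.

// Hypothetical configuration of a @LoadBalanced RestTemplate resolving Kubernetes service names.
@Configuration
public class RestTemplateConfig {

	@LoadBalanced
	@Bean
	public RestTemplate restTemplate() {
		return new RestTemplate();
	}

}

// Example call from a service or controller class:
// Employee[] employees = restTemplate.getForObject("http://employee/department/{departmentId}", Employee[].class, departmentId);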

Here’s the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
	
	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
	// ...
	
}

Here’s the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
	
}

Finally, we have to inject the Feign client bean into the REST controller. Now, we may call the methods defined inside EmployeeClient, which is equivalent to calling REST endpoints.

@RestController
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;
	
	// ...
	
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganizationId(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

5. Building API gateway using Kubernetes Ingress

An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it we should first prepare a YAML descriptor file. The descriptor file should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration visible above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

For testing this solution locally we have to insert a mapping between the IP address and the hostname set in the ingress definition into the hosts file, as shown below. After that we can call the services through the ingress using the defined hostname, like this: http://microservices.info/employee.

192.168.99.100 microservices.info

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.
micro-kube-2

6. Enabling API specification on gateway using Swagger2

Ok, what if we would like to expose a single Swagger documentation for all microservices deployed on Kubernetes? Well, here things are getting complicated… We could run a container with Swagger UI and map all the paths exposed by the ingress manually, but that is rather not a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here’s the list of dependencies used in my gateway-service project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster. We would like to display documentation only for our three microservices. That’s why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the routes’ addresses from Kubernetes discovery and configure them as Swagger resources as shown below.

@Configuration
public class GatewayApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(location);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}
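
The main class of gateway-service is not shown in this article. To register the Zuul routes against the Kubernetes discovery client it would need to look more or less like the sketch below, which is my assumption rather than code taken from the repository.

// Hypothetical main class of gateway-service: Zuul proxy combined with Kubernetes-based discovery.
@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class GatewayApplication {

	public static void main(String[] args) {
		SpringApplication.run(GatewayApplication.class, args);
	}

}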

The application gateway-service should be deployed on the cluster the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is available at http://192.168.99.100:31237/swagger-ui.html.
micro-kube-3

Conclusion

I’m actually rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes’ popularity as a platform has been growing rapidly over the last months, but it still has some weaknesses. One of them is inter-service communication. Kubernetes doesn’t give us many out-of-the-box mechanisms that allow configuring more advanced rules. This is a reason for creating service mesh frameworks for Kubernetes like Istio or Linkerd. While these projects are still relatively new solutions, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.