Kotlin Microservices with Ktor

Ktor is a framework for building asynchronous applications on the server and client side, written entirely in Kotlin. The main goal of Ktor is to provide an end-to-end multiplatform application framework for connected applications. It allows you to easily build web applications and HTTP services, so we can use it for building a microservices-based architecture. Let’s discuss the main features of the Ktor framework using the example of a simple system consisting of two microservices.

1. Setting up an environment

We can use Gradle or Maven for setting up our build environment. The first goal is to add some dedicated Maven repositories, because Ktor dependencies are not available in the central repository.

<repositories>
    <repository>
        <id>ktor</id>
        <url>http://dl.bintray.com/kotlin/ktor</url>
    </repository>
    <repository>
        <id>kotlinx</id>
        <url>http://dl.bintray.com/kotlin/kotlinx</url>
    </repository>
    <repository>
        <id>jcenter</id>
        <url>http://jcenter.bintray.com</url>
    </repository>
</repositories>

The current version of the Ktor framework is 0.9.5.

<properties>
    <ktor.version>0.9.5</ktor.version>
</properties>

Before adding any dependencies we should also configure kotlin-maven-plugin and pass a compiler argument to avoid warnings when using Kotlin coroutines, which are still an experimental feature in Kotlin.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<version>${kotlin.version}</version>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<jvmTarget>1.8</jvmTarget>
		<args>
			<arg>-Xcoroutines=enable</arg>
		</args>
	</configuration>
</plugin>

2. Sample applications

We will create two microservices built on top of the Ktor framework: account-service and customer-service. The application customer-service calls an endpoint exposed by account-service to search for all accounts of a given customer. Both microservices also expose some other endpoints for adding or finding objects. There is also a third element in our architecture: a discovery server. We will use HashiCorp’s Consul for that. Every instance of account-service registers itself in the discovery server, while customer-service fetches the list of registered instances to obtain their addresses and ports. In summary, this is a typical example of communication between microservices, presented in some of my previous articles on this blog. The sample system architecture is visualized in the following diagram.

[image: ktor-1 – sample system architecture]

3. The Basics

First, we will add some dependencies required for running Ktor on the server side. Our application will be launched on a Netty server and will use the Jackson library for JSON serialization.

<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-server-core</artifactId>
	<version>${ktor.version}</version>
</dependency>
<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-server-netty</artifactId>
	<version>${ktor.version}</version>
</dependency>
<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-jackson</artifactId>
	<version>${ktor.version}</version>
</dependency>

After that we can create the main method that initializes an embedded server based on Netty. Because we would like to set the server’s listen port dynamically during application startup, we use the variant of embeddedServer that takes command-line arguments as a parameter. Thanks to that, we can pass the parameter -port=TARGET_PORT when starting every single instance of the application.

fun main(args: Array<String>) {
   val server = embeddedServer(Netty, commandLineEnvironment(args))
   server.start(wait = true)
}

It is possible to implement different configuration strategies with Ktor, but the recommended way is through a configuration file in HOCON format. Here’s the application.conf file for customer-service.

ktor {
  deployment {
    port: 8095
  }
  application {
    modules = [ pl.piomin.services.CustomerApplicationModuleKt.main ]
  }
}
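
The application.conf for account-service looks analogous. Here’s a sketch (the module name is assumed by analogy with the customer-service configuration, and port 8090 matches the first account-service instance used later in this article, so treat both as assumptions):

ktor {
  deployment {
    port: 8090
  }
  application {
    modules = [ pl.piomin.services.AccountApplicationModuleKt.main ]
  }
}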

Each application takes a list of required Ktor modules as a parameter. In fact, modules are the vital part of your application. They are user-defined functions receiving the Application class that are in charge of configuring the server pipeline: installing features, registering routes, handling requests, etc. For the configuration visible above, the main method that defines our module is available inside the file CustomerApplicationModule.kt, located in the package pl.piomin.services.
Inside that method you can find the declarations of the features provided by the Ktor framework and used by the application. This is the most important part of our code, because it contains almost all the logic implemented by the application. The most commonly used feature is Routing. It defines all the HTTP API endpoints exposed by our application. In the following fragment of code I have defined four endpoints: a single POST method for adding a new account, and three GET endpoints providing different find methods.

package pl.piomin.services
fun Application.main() {
   val repository = AccountRepository()
   //...
   routing {
      get("/accounts") {
         call.respond(message = repository.accounts)
      }
      get("/accounts/{id}") {
         val id: String? = call.parameters["id"]
         if (id != null)
            call.respond(message = repository.accounts.filter { it.id == id.toInt() })
      }
      get("/accounts/customer/{customerId}") {
         val customerId: String? = call.parameters["customerId"]
         if (customerId != null)
            call.respond(message = repository.accounts.filter { it.customerId == customerId.toInt() })
      }
      post("/accounts") {
         var account: Account = call.receive()
         account.id = repository.accounts.size + 1
         repository.addAccount(account)
         log.info("$account")
         call.respond(message = account)
      }
   }
}
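
The Account model and AccountRepository used above are not shown in the listing. A minimal in-memory sketch consistent with these routes could look as follows (the field names are inferred from the JSON payloads shown later in this article, so treat it as an assumption rather than the exact source):

data class Account(var id: Int = 0, var customerId: Int = 0, var number: String = "", var balance: Int = 0)

class AccountRepository {
   // a simple in-memory store; all data is lost on restart
   val accounts: MutableList<Account> = mutableListOf()
   fun addAccount(account: Account) {
      accounts.add(account)
   }
}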

4. Using built-in features

A Ktor application typically consists of a series of features. You can think of features as functionality that is injected into the request and response pipeline. Usually, an application has a series of features such as DefaultHeaders, which adds headers to every outgoing response, or Routing, which allows us to define routes to handle requests, etc. We can create our own custom features, but there is also a set of built-in features implemented as Ktor modules. We can install any of the built-in features just by passing the feature name inside an install statement. Here’s the list of features I used in the sample applications:

  • ContentNegotiation – provides automatic content conversion according to the Content-Type and Accept headers. I have used the Jackson library for converting between objects and JSON content
  • Metrics – provides an implementation for generating metrics with useful information about the server and the requests. We can use different reporters by selecting from the modules provided by the Dropwizard Metrics library. I decided to use Slf4jReporter, which prints metric values to the log destination, but you could also use, for example, InfluxReporter for sending metrics to InfluxDB
  • CallLogging – used for logging incoming client requests. It leverages ApplicationEnvironment.log, which uses SLF4J, so you can easily configure the output
  • CallId – this feature allows you to identify a request/call by generating a request id, and can work along with the CallLogging feature. The CallLogging feature adds the generated value to the MDC by calling the callIdMdc method with the MDC field name as a parameter

Here’s the fragment of code responsible for defining Ktor features used by the application.

install(ContentNegotiation) {
	jackson {
	}
}
install(Metrics) {
	Slf4jReporter.forRegistry(registry).outputTo(log).build().start(10, TimeUnit.SECONDS)
}
install(CallLogging) {
	level = Level.TRACE
	callIdMdc("X-Request-ID")
}
install(CallId) {
	generate(10)
}

To be able to work with the Metrics feature we first need to include the following dependency in the pom.xml file.

<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-metrics</artifactId>
	<version>${ktor.version}</version>
</dependency>

5. Configuring logging

The logging configuration for the Ktor framework is based on SLF4J. You just need to include a dependency on a logging provider in pom.xml, for example Logback.

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>

Then you need to create a logback.xml configuration file and place it in the src/main/resources directory. The following configuration forces SLF4J to print logs only to the console. It includes the MDC field X-Request-ID generated using the CallId feature.

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{X-Request-ID}] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="trace">
        <appender-ref ref="STDOUT"/>
    </root>

    <logger name="org.eclipse.jetty" level="INFO"/>
    <logger name="io.netty" level="INFO"/>
</configuration>

Now you are able to use a logger in your application just by importing the io.ktor.application.log extension property.

import io.ktor.application.log
// ...
post("/accounts") {
	var account: Account = call.receive()
	account.id = repository.accounts.size + 1
	repository.addAccount(account)
	log.info("$account")
	call.respond(message = account)
}

6. Running Consul server

The implementation of our applications is almost finished. We just need to add communication between the two sample microservices. To achieve this, we first need to run the discovery server. Running Consul on the local machine is pretty easy using a Docker container. Here’s the command that runs Consul in standalone mode and exposes its API on port 8500.

$ docker run -d --name=consul -e CONSUL_BIND_INTERFACE=eth0 -p 8500:8500 -p 8600:8600 consul

Consul provides a web interface, which is available at http://192.168.99.100:8500.

7. Custom Ktor feature for service discovery with Consul

The Ktor framework does not provide any components that implement typical microservice patterns like service discovery or distributed configuration. The library is at an early stage of development (the current version is still below 1.0), so such features will probably be implemented in the future. However, with Ktor we may easily implement a custom feature that is able to communicate with Consul. There are two types of features available for applications that use Ktor: server-side and client-side features. In this case we will implement a client-side feature. It is responsible for intercepting the HTTP client’s request pipeline and adding communication with Consul to it.
Before starting any implementation we need to include a set of Ktor client libraries, as well as a library providing methods for calling the Consul API.

<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-client-apache</artifactId>
	<version>${ktor.version}</version>
</dependency>
<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-client-json</artifactId>
	<version>${ktor.version}</version>
</dependency>
<dependency>
	<groupId>io.ktor</groupId>
	<artifactId>ktor-client-jackson</artifactId>
	<version>${ktor.version}</version>
</dependency>
<dependency>
	<groupId>com.orbitz.consul</groupId>
	<artifactId>consul-client</artifactId>
	<version>1.2.3</version>
</dependency>

Let’s take a look at the implementation of ConsulFeature. It uses the Consul client provided by consul-client. The default Consul address is http://localhost:8500, but it can be overridden on the calling side. I would not like to get into the implementation details of Ktor client features here. The most important thing for us is the code inside the install method. It is executed during the Render phase, which is fired before the Send phase. Our custom feature replaces the service name provided as the hostname in the client’s URL with an address and port taken from Consul.

class ConsulFeature(var consulUrl: String) {

    class Config {
        var consulUrl: String = "http://localhost:8500"
        fun build(): ConsulFeature = ConsulFeature(consulUrl)
    }

    companion object Feature : HttpClientFeature<Config, ConsulFeature> {
        var currentNodeIndex: Int = 0

        override val key = AttributeKey<ConsulFeature>("ConsulFeature")

        override fun prepare(block: Config.() -> Unit): ConsulFeature = Config().apply(block).build()

        override fun install(feature: ConsulFeature, scope: HttpClient) {
            scope.requestPipeline.intercept(HttpRequestPipeline.Render) {
                var consulClient = Consul.builder().withUrl(feature.consulUrl).build()
                val nodes = consulClient.healthClient().getHealthyServiceInstances(context.url.host).response
                val selectedNode = nodes[currentNodeIndex]
                context.url.host = selectedNode.service.address
                context.url.port = selectedNode.service.port
                currentNodeIndex = (currentNodeIndex + 1) % nodes.size
                println("Calling ${selectedNode.service.id}: ${context.url.buildString()}")
            }
        }
    }
}

How does it work? Everything should be clear after looking at the following fragment of code, containing the HTTP client declaration and the usage of that client inside the customer-service endpoint implementation. Instead of setting an IP address in the client’s URL, I used the name of the called service – in this case account-service. This name is then replaced by ConsulFeature with an address and port taken from the Consul server. If there is more than one instance of account-service registered in Consul, the feature performs load balancing using the typical round-robin method.

val client = HttpClient(Apache) {
	install(ConsulFeature) {
		consulUrl = "http://192.168.99.100:8500"
	}
	install(JsonFeature)
}
// ...
routing {
	get("/customers/{id}") {
		val id: String? = call.parameters["id"]
		if (id != null) {
			val accounts = client.get<Accounts>("http://account-service/accounts/customer/$id")
			// 'customer' is assumed to have been fetched earlier in the handler, e.g. from an in-memory repository
			val customerRet = customer.copy(id = customer.id, name = customer.name)
			customerRet.accounts.addAll(accounts)
			call.respond(message = customerRet)
		}
	}
}
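
The snippet above relies on a customer object with a list of accounts. Based on the JSON response shown in section 8, a minimal model could look like this (an illustrative assumption, not the exact source):

data class Customer(var id: Int = 0, var name: String = "", val accounts: MutableList<Account> = mutableListOf())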

And one last thing: the application needs to register itself in Consul after startup. Here’s the main function of account-service. It uses the register method of the Consul client. A unique id is generated using the application’s listen port number.

fun main(args: Array<String>) {
    val server = embeddedServer(Netty, commandLineEnvironment(args))
    val consulClient = Consul.builder().withUrl("http://192.168.99.100:8500").build()
    val service = ImmutableRegistration.builder()
            .id("account-${server.environment.connectors[0].port}")
            .name("account-service")
            .address("localhost")
            .port(server.environment.connectors[0].port)
            .build()
    consulClient.agentClient().register(service)

    server.start(wait = true)
}

8. How it works

OK, let’s run two instances of account-service and a single instance of customer-service. When running two instances of account-service we need to override the default port number with the run parameter -port=PORT_NUMBER.
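
Assuming each application is packaged as a runnable fat JAR (the artifact names below are illustrative and depend on your build configuration), starting the instances could look like this:

$ java -jar account-service.jar -port=8090
$ java -jar account-service.jar -port=8091
$ java -jar customer-service.jar -port=8095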

[image: ktor-2]

Every microservice instance should be registered in Consul after startup.

[image: ktor-3]

We are running two instances of account-service. Here are the details.

[image: ktor-4]

Now, let’s add some test data by calling the POST endpoints exposed by our microservices. As you can see, I added new accounts only on the first instance of account-service. Because all the microservices store data in memory, those objects will be held only by the instance running on port 8090.

$ curl -d '{"name":"John Smith"}' -H "Content-Type: application/json" -X POST http://localhost:8095/customers
$ curl -d '{"number":"1234567890", "balance":5000, "customerId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/accounts
$ curl -d '{"number":"1234567891", "balance":10000, "customerId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/accounts

If you call the endpoint http://localhost:8095/customers/1, customer-service tries to connect to the running instances of account-service. The first time it returns the response {"id":1,"name":"John Smith","accounts":[{"id":1,"balance":5000,"number":"1234567890","customerId":1},{"id":2,"balance":10000,"number":"1234567891","customerId":1}]}, while the second time it returns {"id":1,"name":"John Smith","accounts":[]}. That is the expected result: ConsulFeature load balances between the two instances of account-service, while only the first instance stores any data. Here’s a fragment of the application logs.

[image: ktor-5]

Conclusion

In this article I presented how to use the Ktor framework for building a microservices architecture with a Consul discovery server. We used some basic Ktor features like routing, logging and metrics, as well as a more advanced solution: building our own feature that interacts with Consul during inter-service communication. Ktor seems to be a very interesting framework, and I will definitely follow the progress of its development. The source code with the sample applications is available on GitHub in the repository https://github.com/piomin/sample-kotlin-ktor-microservices.git.


Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not complicated. The Kubernetes extension, which is part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all the Kubernetes resources required to set up your environment, and once everything is ready it runs the defined integration tests.
One very good piece of news related to Arquillian Cube is that it supports the Istio framework: you can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. It allows you to test unexpected situations in network communication between microservices, like server errors or timeouts.
If you would like to run tests using Istio resources on Minishift, you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let’s do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let’s do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let’s create a new project dedicated to Istio and then add the required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed to the installation of the Istio components. I downloaded the newest available version of Istio – 1.0.1. The installation file is available under the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests with OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now, we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to access additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let’s take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard setup created for running on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I have invoked the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion on an empty list of accounts: Assert.assertTrue(c.getAccounts().isEmpty()). Why? We will simulate a timeout in the communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            this.id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public  void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", customerUrl);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into route rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule’s corresponding request destination. The faults can be either delays or aborts, and we can define the percentage of affected requests using the percent field for both types of fault. In the following Istio resource I have defined a 2 second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1
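
For comparison, the other fault type – abort – could be declared as shown below. This illustrative rule (not used in the tests described in this article) returns HTTP 500 for 20% of the requests sent to account-service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      abort:
        httpStatus: 500
        percent: 20
    route:
    - destination:
        host: account-service
        subset: v1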

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the tests we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar into the pod definitions using the istioctl kube-inject command, as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here’s a fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a MongoDB database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any of the applications that connect to it. To achieve this, we should set the Mongo deployment resource in the Arquillian configuration file via the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the admin user. You can collect it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

The communication between customer-service and account-service is realized by the Vert.x WebClient. We set the request timeout for the client to 1 second. Because Istio injects a 2 second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList()));
			}
		});
		return this;
	}
}

The full code of the sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or directly from your IDE. The test1CustomerRoute test is executed first: it adds a new customer and saves the generated id for the two subsequent tests.
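
Running the whole suite from the command line could look like this (assuming a standard Maven setup where the Arquillian tests are bound to the build):

$ mvn clean install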

[image: arquillian-istio-3]

The next test is test2AccountRoute. It adds an account for the customer created during the previous test.

[image: arquillian-istio-2]

Finally, the test responsible for verifying the communication between microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in the communication with account-service.

[image: arquillian-istio-1]

Testing Microservices: Tools and Frameworks

There are some key challenges we face around testing a microservices architecture. The selection of the right tools is one of the elements that helps us deal with the issues related to those challenges. First, let’s identify the most important elements involved in the process of microservices testing. Here are some of them:

  • Teams coordination – with many independent teams managing their own microservices, it becomes very challenging to coordinate the overall process of software development and testing
  • Complexity – there are many microservices that communicate with each other. We need to ensure that every one of them works properly and is resistant to slow responses or failures of other microservices
  • Performance – since there are many independent services, it is important to test the whole architecture under traffic close to production levels

Let’s discuss some interesting frameworks that may help you in testing a microservices-based architecture.

Component tests with Hoverfly

Hoverfly’s simulation mode may be especially useful for building component tests. During component tests we verify the whole microservice without communicating over the network with other microservices or external datastores. The following picture shows how such a test is performed for our sample microservice.

[image: testing-microservices-1]

Hoverfly provides a simple DSL for creating simulations and a JUnit integration for using it within JUnit tests. It may be orchestrated via a JUnit @Rule. We simulate two services and then override the Ribbon properties to resolve the addresses of those services by client name. We should also disable communication with the Eureka discovery server, both by disabling registration after application boot and by not fetching the list of services for the Ribbon client. Hoverfly simulates responses for the PUT and GET methods exposed by the passenger-management and driver-management microservices. The controller is the main component that implements the business logic in our application. It stores data using an in-memory repository component and communicates with other microservices through @FeignClient interfaces. By testing the three methods implemented by the controller we are testing the whole business logic implemented inside the trip-management service.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripComponentTests {

    ObjectMapper mapper = new ObjectMapper();

    @Autowired
    MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get(HoverflyMatchers.startsWith("/passengers/login/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}")))
                    .put(HoverflyMatchers.startsWith("/passengers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}"))),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
                    .put(HoverflyMatchers.startsWith("/drivers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
    )).printSimulationData();

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

The tests visible above verify only positive scenarios. What about testing unexpected behaviour like network delays or server errors? With Hoverfly we can easily simulate such behaviour and define some negative scenarios. In the following fragment of code I have defined three scenarios. In the first of them, the target service’s response is delayed by 2 seconds. In order to simulate a timeout on the client side, I had to lower the default ReadTimeout for the Ribbon load balancer and disable the Hystrix circuit breaker for the Feign client. The second test simulates an HTTP 500 response status from the passenger-management service. The last scenario assumes an empty response from the method responsible for searching for the nearest driver.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management",
        "feign.hystrix.enabled=false",
        "ribbon.ReadTimeout=500"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
public class TripNegativeComponentTests {

    private ObjectMapper mapper = new ObjectMapper();
    @Autowired
    private MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get("/passengers/login/test1")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")).withDelay(2000, TimeUnit.MILLISECONDS))
                    .get("/passengers/login/test2")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")))
                    .get("/passengers/login/test3")
                    .willReturn(ResponseCreators.serverError()),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success().body("{}"))
            ));

    @Test
    public void testCreateTripWithTimeout() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test1"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithError() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test3"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithNoDrivers() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test2"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

}

All the timeouts and errors in communication with external microservices are handled by the bean annotated with @ControllerAdvice. In such cases the trip-management microservice should not return a server error response, but 200 OK with a JSON response containing a status field equal to REJECTED.

@ControllerAdvice
public class TripControllerErrorHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler({RetryableException.class, FeignException.class})
    protected ResponseEntity handleFeignTimeout(RuntimeException ex, WebRequest request) {
        Trip trip = new Trip();
        trip.setStatus(TripStatus.REJECTED);
        return handleExceptionInternal(ex, trip, null, HttpStatus.OK, request);
    }

}

Contract tests with Pact

The next test strategy usually implemented for a microservices-based architecture is consumer-driven contract testing. In fact, there are some tools dedicated especially to this type of test. One of them is Pact. Contract testing is a way to ensure that services can communicate with each other without implementing full integration tests. A contract is agreed between the two sides of the communication: the consumer and the provider. Pact assumes that the contract is generated and published on the consumer side, and then verified by the provider.

Pact provides a tool called Pact Broker that can store and share contracts between consumers and providers. It exposes a simple RESTful API for publishing and retrieving pacts, and an embedded web dashboard for navigating the API. We can easily run Pact Broker on the local machine using its Docker image.

[image: micro-testing-2]

We will begin by running Pact Broker. Pact Broker requires a running instance of PostgreSQL, so first we have to launch it using a Docker image and then link our broker container to that container.

docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=oauth -e POSTGRES_PASSWORD=oauth123 -e POSTGRES_DB=oauth postgres
docker run -d --name pact-broker --link postgres:postgres -e PACT_BROKER_DATABASE_USERNAME=oauth -e PACT_BROKER_DATABASE_PASSWORD=oauth123 -e PACT_BROKER_DATABASE_HOST=postgres -e PACT_BROKER_DATABASE_NAME=oauth -p 9080:80 dius/pact-broker

The next step is to implement contract tests on the consumer side. We will use the JVM implementation of the Pact library for that. It provides the PactProviderRuleMk2 object responsible for creating stubs of the provider service; we should annotate it with the JUnit @Rule. Ribbon forwards all requests to passenger-management to the stub address – in this case localhost:8180. Pact JVM supports annotations and provides a DSL for building test scenarios. The test method responsible for generating contract data should be annotated with @Pact. It is important to set the state and provider fields, because the generated contract will be verified on the provider side using these names. The generated pacts are verified inside the same test class by the methods annotated with @PactVerification. The fragment field points to the name of the method responsible for generating the pact inside the same test class. The contract is tested using the PassengerManagementClient @FeignClient.

@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        "driver-management.ribbon.listOfServers=localhost:8190",
        "passenger-management.ribbon.listOfServers=localhost:8180",
        "ribbon.eureka.enabled=false",
        "eureka.client.enabled=false",
        "ribbon.ReadTimeout=5000"
})
public class PassengerManagementContractTests {

    @Rule
    public PactProviderRuleMk2 stubProvider = new PactProviderRuleMk2("passengerManagementProvider", "localhost", 8180, this);
    @Autowired
    private PassengerManagementClient passengerManagementClient;

    @Pact(state = "get-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callGetPassenger(PactDslWithProvider builder) {
        DslPart body = new PactDslJsonBody().integerType("id").stringType("name").numberType("balance").close();
        return builder.given("get-passenger").uponReceiving("test-get-passenger")
                .path("/passengers/login/test").method("GET").willRespondWith().status(200).body(body).toPact();
    }

    @Pact(state = "update-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callUpdatePassenger(PactDslWithProvider builder) {
        return builder.given("update-passenger").uponReceiving("test-update-passenger")
                .path("/passengers").method("PUT").bodyWithSingleQuotes("{'id':1,'amount':1000}", "application/json").willRespondWith().status(200)
                .bodyWithSingleQuotes("{'id':1,'name':'Adam Smith','balance':5000}", "application/json").toPact();
    }

    @Test
    @PactVerification(fragment = "callGetPassenger")
    public void verifyGetPassengerPact() {
        Passenger passenger = passengerManagementClient.getPassenger("test");
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

    @Test
    @PactVerification(fragment = "callUpdatePassenger")
    public void verifyUpdatePassengerPact() {
        Passenger passenger = passengerManagementClient.updatePassenger(new PassengerInput(1L, 1000));
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

}

Just running the tests is not enough. We also have to publish the pacts generated during the tests to the Pact Broker. In order to achieve this we have to include the following Maven plugin in our pom.xml and then execute the command mvn clean install pact:publish.

<plugin>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-maven_2.12</artifactId>
	<version>3.5.21</version>
	<configuration>
		<pactBrokerUrl>http://192.168.99.100:9080</pactBrokerUrl>
	</configuration>
</plugin>

Pact provides support for Spring on the provider side. Thanks to that we may use MockMvc controllers or inject properties from application.yml into the test class. Here’s the dependency declaration that has to be included in our pom.xml.

<dependency>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-spring_2.12</artifactId>
	<version>3.5.21</version>
	<scope>test</scope>
</dependency>

Now the contract is verified on the provider side. We need to pass the provider name inside the @Provider annotation and the name of each state for every verification test inside @State. These values were set during the tests on the consumer side inside the @Pact annotation (the state and provider fields).

@RunWith(SpringRestPactRunner.class)
@Provider("passengerManagementProvider")
@PactBroker
public class PassengerControllerContractTests {

    @InjectMocks
    private PassengerController controller = new PassengerController();
    @Mock
    private PassengerRepository repository;
    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void before() {
        MockitoAnnotations.initMocks(this);
        target.setControllers(controller);
    }

    @State("get-passenger")
    public void testGetPassenger() {
        target.setRunTimes(3);
        Mockito.when(repository.findByLogin(Mockito.anyString()))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 4000))
                .thenReturn(new Passenger(3L, "Tom Hamilton", "hamilton", 400000))
                .thenReturn(new Passenger(5L, "John Scott", "scott", 222));
    }

    @State("update-passenger")
    public void testUpdatePassenger() {
        target.setRunTimes(1);
        Passenger passenger = new Passenger(1L, "Adam Smith", "test", 4000);
        Mockito.when(repository.findById(1L)).thenReturn(passenger);
        Mockito.when(repository.update(Mockito.any(Passenger.class)))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 5000));
    }
}

The Pact Broker host and port are injected from the application.yml file.

pactbroker:
  host: "192.168.99.100"
  port: "8090"

Performance tests with Gatling

An important step in testing microservices before deploying them to production is performance testing. One of the interesting tools in this area is Gatling, a highly capable load testing tool written in Scala. This means that we also have to use its Scala DSL in order to build test scenarios. Let’s begin by adding the required library to the pom.xml file.

<dependency>
	<groupId>io.gatling.highcharts</groupId>
	<artifactId>gatling-charts-highcharts</artifactId>
	<version>2.3.1</version>
</dependency>

Now, we may proceed to the test. In the scenario visible below we are testing two endpoints exposed by trip-management: POST /trips and PUT /trips/payment/${tripId}. In fact, this scenario verifies the whole functionality of our sample system, where we set up a trip and then pay for it after it has finished.
Every test class using Gatling needs to extend the Simulation class. We define a scenario using the scenario method and then set its name. We may define multiple executions inside a single scenario. After every execution of the POST /trips method, the test saves the generated id returned by the service. Then it inserts that id into the URL used for calling PUT /trips/payment/${tripId}. Every single request expects a response with 200 OK status.
Gatling provides two interesting features which are worth mentioning, and you can see how they are used in the following performance test. The first of them is the feeder. It is used for polling records and injecting their content into the test. The feeder rPassengers randomly selects one of five defined logins. The final result may be verified using the Assertions API, which is responsible for verifying that global statistics like response time or the number of failed requests match the expectations for the whole simulation. In the scenario visible below the criterion is a maximum response time lower than 100 milliseconds.

class CreateAndPayTripPerformanceTest extends Simulation {

  val rPassengers = Iterator.continually(Map("passenger" -> List("walker","smith","hamilton","scott","holmes").lift(Random.nextInt(5)).get))

  val scn = scenario("CreateAndPayTrip").feed(rPassengers).repeat(100, "n") {
    exec(http("CreateTrip-API")
      .post("http://localhost:8090/trips")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"destination":"test${n}","locationX":${n},"locationY":${n},"username":"${passenger}"}"""))
      .check(status.is(200), jsonPath("$.id").saveAs("tripId"))
    ).exec(http("PayTrip-API")
      .put("http://localhost:8090/trips/payment/${tripId}")
      .header("Content-Type", "application/json")
      .check(status.is(200))
    )
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(5, TimeUnit.MINUTES))
    .assertions(global.responseTime.max.lt(100))

}

In order to run a Gatling performance test you need to include the following Maven plugin in your pom.xml. You may run a single scenario or multiple scenarios. After including the plugin you only need to execute the command mvn clean gatling:test.

<plugin>
	<groupId>io.gatling</groupId>
	<artifactId>gatling-maven-plugin</artifactId>
	<version>2.2.4</version>
	<configuration>
		<simulationClass>pl.piomin.performance.tests.CreateAndPayTripPerformanceTest</simulationClass>
	</configuration>
</plugin>

Here are some diagrams illustrating the results of the performance tests for our microservice. Because the maximum response time was greater than the value set inside the assertion (100 ms), the test failed.

microservices-testing-2

and …

microservices-testing-3

Summary

The right selection of tools is not the most important phase of microservices testing. However, the right tools can help you face the key challenges related to it. Hoverfly allows you to create full component tests that verify if your microservice is able to handle delays or errors from downstream services. Pact helps you organize teamwork by sharing and verifying contracts between independently developed microservices. Finally, Gatling can help you implement load tests for selected scenarios, in order to verify the end-to-end performance of your system.
The source code used as a demo for this article is available on GitHub: https://github.com/piomin/sample-testing-microservices.git. If you found this article interesting, you may also like some other articles related to this subject:

GraphQL – The Future of Microservices?

Often, GraphQL is presented as a revolutionary way of designing web APIs in comparison to REST. However, if you take a closer look at that technology you will see that there are not so many differences between them. GraphQL is a relatively new solution that was open sourced by Facebook in 2015. Today, REST is still the most popular paradigm used for exposing APIs and inter-service communication between microservices. Is GraphQL going to overtake REST in the future? Let's take a look at how to create microservices communicating through a GraphQL API using Spring Boot and the Apollo client.

Let's begin with the architecture of our sample system. We have three microservices that communicate with each other using URLs taken from Eureka service discovery.

graphql-arch

1. Enabling Spring Boot support for GraphQL

We can easily enable support for GraphQL in a server-side Spring Boot application just by including some starters. After including graphql-spring-boot-starter the GraphQL servlet is automatically accessible under the path /graphql. We can override that default path by setting the property graphql.servlet.mapping in the application.yml file. We should also enable GraphiQL – an in-browser IDE for writing, validating, and testing GraphQL queries – and the GraphQL Java Tools library, which contains useful components for creating queries and mutations. Thanks to that library, any files on the classpath with the .graphqls extension will be used to provide the schema definition.

<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphiql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-java-tools</artifactId>
	<version>5.2.3</version>
</dependency>

2. Building GraphQL schema definition

Every schema definition contains data type declarations, relationships between them, and a set of operations, including queries for searching objects and mutations for creating, updating or deleting data. Usually we start by creating a type declaration, which is responsible for the domain object definition. You can mark a field as required using the ! char or as an array using [...]. Every field has to contain a type declaration or a reference to another type available in the specification.

type Employee {
  id: ID!
  organizationId: Int!
  departmentId: Int!
  name: String!
  age: Int!
  position: String!
  salary: Int!
}

Here's the Java class equivalent to the GraphQL definition visible above. The GraphQL type Int can also be mapped to Java Long. The ID scalar type represents a unique identifier – in that case it is also a Java Long.

public class Employee {

	private Long id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	private int salary;
	
	// constructor
	
	// getters
	// setters
	
}

The next part of the schema definition contains the queries and mutations declaration. Most of the queries return a list of objects, which is marked with [Employee]. Inside the EmployeeQueries type we have declared all find methods, while inside the EmployeeMutations type there are methods for adding, updating and removing employees. If you pass a whole object to a method you need to declare it as an input type.

schema {
  query: EmployeeQueries
  mutation: EmployeeMutations
}

type EmployeeQueries {
  employees: [Employee]
  employee(id: ID!): Employee!
  employeesByOrganization(organizationId: Int!): [Employee]
  employeesByDepartment(departmentId: Int!): [Employee]
}

type EmployeeMutations {
  newEmployee(employee: EmployeeInput!): Employee
  deleteEmployee(id: ID!) : Boolean
  updateEmployee(id: ID!, employee: EmployeeInput!): Employee
}

input EmployeeInput {
  organizationId: Int
  departmentId: Int
  name: String
  age: Int
  position: String
  salary: Int
}

3. Queries and mutation implementation

Thanks to GraphQL Java Tools and Spring Boot GraphQL auto-configuration we don't need to do much to implement queries and mutations in our application. The EmployeeQueries bean has to implement the GraphQLQueryResolver interface. Based on that, Spring is able to automatically detect and call the right method in response to one of the GraphQL queries declared inside the schema. Here's a class containing an implementation of queries.

@Component
public class EmployeeQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeQueries.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public List<Employee> employees() {
		LOGGER.info("Employees find");
		return repository.findAll();
	}
	
	public List<Employee> employeesByOrganization(Long organizationId) {
		LOGGER.info("Employees find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}

	public List<Employee> employeesByDepartment(Long departmentId) {
		LOGGER.info("Employees find: departmentId={}", departmentId);
		return repository.findByDepartment(departmentId);
	}
	
	public Employee employee(Long id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id);
	}
	
}

If you would like to call, for example, the method employee(Long id), you should build the following query. You can easily test it in your application using the GraphiQL tool, available under the path /graphiql.

graphql-1
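
In text form, the query from the screenshot above might look like this (the selected fields are just an example – you may ask for any subset of them):

query {
  employee(id: 1) {
    id
    name
    position
    salary
  }
}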
The bean responsible for the implementation of mutation methods needs to implement GraphQLMutationResolver. Despite the declaration of EmployeeInput, we still use the same domain object as returned by queries – Employee.

@Component
public class EmployeeMutations implements GraphQLMutationResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeMutations.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public Employee newEmployee(Employee employee) {
		LOGGER.info("Employee add: employee={}", employee);
		return repository.add(employee);
	}
	
	public boolean deleteEmployee(Long id) {
		LOGGER.info("Employee delete: id={}", id);
		return repository.delete(id);
	}
	
	public Employee updateEmployee(Long id, Employee employee) {
		LOGGER.info("Employee update: id={}, employee={}", id, employee);
		return repository.update(id, employee);
	}
	
}

We can also use GraphiQL to test mutations. Here's the command that adds a new employee and receives a response with the employee's id and name.

graphql-2
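
In text form, that mutation might look like the following sketch (all field values are illustrative):

mutation {
  newEmployee(employee: {organizationId: 1, departmentId: 1, name: "John Smith", age: 33, position: "Developer", salary: 10000}) {
    id
    name
  }
}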

4. Generating client-side classes

Ok, we have successfully created a server-side application. We have already tested some queries using GraphiQL. But our main goal is to create some other microservices that communicate with the employee-service application through its GraphQL API. This is where most of the tutorials about Spring Boot and GraphQL end.
To be able to communicate with our first application through the GraphQL API we have two choices. We can take a standard REST client and implement the GraphQL API by ourselves with HTTP GET requests, or use one of the existing Java clients. Surprisingly, there are not many GraphQL Java client implementations available. The most serious choice is the Apollo GraphQL Client for Android. Of course it is not designed only for Android devices, and you can successfully use it in your Java microservice.
Before using the client we need to generate classes from the schema and .graphql files. The recommended way to do it is through the Apollo Gradle Plugin. There are also some Maven plugins, but none of them provides the same level of automation as the Gradle plugin, which, for example, automatically downloads the Node.js runtime required for generating client-side classes. So, the first step is to add the Apollo plugin and runtime to the project dependencies.

buildscript {
  repositories {
    jcenter()
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
  }
  dependencies {
    classpath 'com.apollographql.apollo:apollo-gradle-plugin:1.0.1-SNAPSHOT'
  }
}

apply plugin: 'com.apollographql.android'

dependencies {
  compile 'com.apollographql.apollo:apollo-runtime:1.0.1-SNAPSHOT'
}

The GraphQL Gradle plugin tries to find files with the .graphql extension and schema.json inside the src/main/graphql directory. The GraphQL JSON schema can be obtained from your Spring Boot application by calling the resource /graphql/schema.json. A .graphql file contains query definitions. The query employeesByOrganization will be called by organization-service, while employeesByDepartment by both department-service and organization-service. Those two applications need slightly different sets of data in the response. Application department-service requires more detailed information about every employee than organization-service. GraphQL is an excellent solution in that case, because we can define the required set of data in the response on the client side. Here's the query definition of employeesByOrganization called by organization-service.

query EmployeesByOrganization($organizationId: Int!) {
  employeesByOrganization(organizationId: $organizationId) {
    id
    name
  }
}

Application organization-service would also call employeesByDepartment query.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
  }
}

The query employeesByDepartment is also called by department-service, which requires not only id and name fields, but also position and salary.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
    position
    salary
  }
}

All the generated classes are available under the build/generated/source/apollo directory.

5. Building Apollo client with discovery

After generating all required classes and including them into calling microservices we may proceed to the client implementation. Apollo client has two important features that will affect our development:

  • It provides only asynchronous methods based on callbacks
  • It does not integrate with service discovery based on Spring Cloud Netflix Eureka

Here's an implementation of the employee-service client inside department-service. I used EurekaClient directly (1). It gets all running instances registered as EMPLOYEE-SERVICE and then randomly selects one instance from the list of available instances (2). The port number of that instance is passed to ApolloClient (3). Before calling the asynchronous method enqueue provided by ApolloClient we create a lock (4), which waits a maximum of 5 seconds to be released (8). The method enqueue returns the response in the callback method onResponse (5). We map the response body from the GraphQL Employee object to our returned object (6) and then release the lock (7).

@Component
public class EmployeeClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeClient.class);
	private static final int TIMEOUT = 5000;
	private static final String SERVICE_NAME = "EMPLOYEE-SERVICE"; 
	private static final String SERVER_URL = "http://localhost:%d/graphql";
	
	Random r = new Random();
	
	@Autowired
	private EurekaClient discoveryClient; // (1)
	
	public List<Employee> findByDepartment(Long departmentId) throws InterruptedException {
		List<Employee> employees = new ArrayList<>();
		Application app = discoveryClient.getApplication(SERVICE_NAME);
		InstanceInfo ii = app.getInstances().get(r.nextInt(app.size())); // (2)
		ApolloClient client = ApolloClient.builder().serverUrl(String.format(SERVER_URL, ii.getPort())).build(); // (3)
		CountDownLatch lock = new CountDownLatch(1); // (4)
		client.query(EmployeesByDepartmentQuery.builder().build()).enqueue(new ApolloCall.Callback<EmployeesByDepartmentQuery.Data>() {

			@Override
			public void onFailure(ApolloException ex) {
				LOGGER.info("Err: {}", ex);
				lock.countDown();
			}

			@Override
			public void onResponse(Response<EmployeesByDepartmentQuery.Data> res) { // (5)
				LOGGER.info("Res: {}", res);
				employees.addAll(res.data().employeesByDepartment().stream().map(emp -> new Employee(Long.valueOf(emp.id()), emp.name(), emp.position(), emp.salary())).collect(Collectors.toList())); // (6)
				lock.countDown(); // (7)
			}

		});
		lock.await(TIMEOUT, TimeUnit.MILLISECONDS); // (8)
		return employees;
	}
	
}

Finally, EmployeeClient is injected into the query resolver class – DepartmentQueries – and used inside the query departmentsByOrganizationWithEmployees.

@Component
public class DepartmentQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentQueries.class);
	
	@Autowired
	EmployeeClient employeeClient;
	@Autowired
	DepartmentRepository repository;

	public List<Department> departmentsByOrganizationWithEmployees(Long organizationId) {
		LOGGER.info("Departments find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> {
			try {
				d.setEmployees(employeeClient.findByDepartment(d.getId()));
			} catch (InterruptedException e) {
				LOGGER.error("Error calling employee-service", e);
			}
		});
		return departments;
	}
	
	// other queries
	
}

Before calling the target query we should take a look at the schema created for department-service. Every Department object can contain a list of assigned employees, so we also define the type Employee referenced by the Department type.

schema {
  query: DepartmentQueries
  mutation: DepartmentMutations
}

type DepartmentQueries {
  departments: [Department]
  department(id: ID!): Department!
  departmentsByOrganization(organizationId: Int!): [Department]
  departmentsByOrganizationWithEmployees(organizationId: Int!): [Department]
}

type DepartmentMutations {
  newDepartment(department: DepartmentInput!): Department
  deleteDepartment(id: ID!) : Boolean
  updateDepartment(id: ID!, department: DepartmentInput!): Department
}

input DepartmentInput {
  organizationId: Int!
  name: String!
}

type Department {
  id: ID!
  organizationId: Int!
  name: String!
  employees: [Employee]
}

type Employee {
  id: ID!
  name: String!
  position: String!
  salary: Int!
}

Now, we can call our test query with the list of required fields using GraphiQL. The department-service application is available under port 8091 by default, so we may call it using the address http://localhost:8091/graphiql.

graphql-3
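
Such a test query might look like the following sketch, which fetches departments together with their nested employees (the field selection is illustrative):

query {
  departmentsByOrganizationWithEmployees(organizationId: 1) {
    id
    name
    employees {
      id
      name
      position
      salary
    }
  }
}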

Conclusion

GraphQL seems to be an interesting alternative to standard REST APIs. However, we should not consider it as a replacement for REST. There are some use cases where GraphQL may be the better choice, and some use cases where REST is the better choice. If your clients do not need the full set of fields returned by the server side, and moreover you have many clients with different requirements towards a single endpoint, GraphQL is a good choice. When it comes to microservices there are no Java-based solutions that allow you to use GraphQL together with service discovery, load balancing or an API gateway out of the box. In this article I have shown an example of using the Apollo GraphQL client together with Spring Cloud Eureka for inter-service communication. The sample application's source code is available on GitHub: https://github.com/piomin/sample-graphql-microservices.git.

Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract

Consumer Driven Contract (CDC) testing is one of the methods that allows you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. Assuming that every microservice is developed by a different team or sometimes even a different vendor, it is important to automate the whole testing process. As usual, we can use a Jenkins server for running contract tests within our Continuous Integration (CI) process.

The sample scenario has been visualized in the picture below. We have one application (person-service) that exposes an API leveraged by three different applications. Each application is implemented by a different development team. Consequently, every application is stored in a separate Git repository and has a dedicated pipeline in Jenkins for building, testing and deploying.

contracts-3 (1)

The source code of the sample applications is available on GitHub in the repository sample-spring-cloud-contract-ci (https://github.com/piomin/sample-spring-cloud-contract-ci.git). I placed all the sample microservices in a single Git repository only to simplify the demo. We will still treat them as separate microservices, developed and built independently.

In this article I used Spring Cloud Contract for the CDC implementation. It is the first-choice solution for JVM applications written in Spring Boot. Contracts can be defined using Groovy or YAML notation. After a build on the producer side, Spring Cloud Contract generates a special JAR file with the stubs suffix, which contains all defined contracts and JSON mappings. Such a JAR file can be built on Jenkins and then published to Artifactory. Contract consumers also use the same Artifactory server, so they can always use the latest version of the stubs file. Because every application expects a different response from person-service, we have to define three different contracts between person-service and its consumers.

contracts-1

Let's analyze the sample scenario. Assuming we have performed some changes in the API exposed by person-service and we have modified the contracts on the producer side, we would like to publish them on the shared server. First, we need to verify the contracts against the producer (1), and in case of success publish the artifact with stubs to Artifactory (2). All the pipelines defined for applications that use this contract are able to trigger a build on a new version of the JAR file with stubs (3). Then, the newest version of the contract is verified against the consumer (4). If contract testing fails, the pipeline is able to notify the responsible team about this failure.

contracts-2

1. Prerequisites

Before implementing and running any sample we need to prepare our environment. We need to launch Jenkins and Artifactory servers on the local machine. The most suitable way to do this is through Docker containers. Here are the commands required to run these containers.

$ docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
$ docker run --name jenkins -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

You may not be familiar with tools like Artifactory and Jenkins, but after starting them we need to configure some things. First you need to initialize the Maven repositories for Artifactory. You will be prompted for that just after the first launch. It also automatically adds one remote repository: JCenter Bintray (https://bintray.com/bintray/jcenter), which is enough for our build. Jenkins also comes with a default set of plugins, which you can install just after the first launch (Install suggested plugins). For this demo, you will also have to install the plugin for integration with Artifactory (https://wiki.jenkins.io/display/JENKINS/Artifactory+Plugin). If you need more details about Jenkins and Artifactory configuration you can refer to my older article How to setup Continuous Delivery environment.

2. Building contracts

We begin contract definition with the producer-side application. The producer exposes only one method, GET /persons/{id}, which returns a Person object. Here are the fields contained in the Person class.

public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	@JsonFormat(pattern = "yyyy-MM-dd")
	private Date birthDate;
	private Gender gender;
	private Contact contact;
	private Address address;
	private String accountNo;

	// ...
}

The following picture illustrates which fields of the Person object are used by which consumers. As you see, some of the fields are shared between consumers, while some others are required only by a single consuming application.

contracts-4

Now we can take a look at the contract definition between person-service and bank-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			gender: $(regex('(MALE|FEMALE)')),
			contact: ([
				email: $(regex(email())),
				phoneNo: $(regex('[0-9]{9}$'))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}

For comparison, here's the definition of the contract between person-service and letter-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			address: ([
				city: $(regex(alphaNumeric())),
				country: $(regex(alphaNumeric())),
				postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
				houseNo: $(regex(positiveInt())),
				street: $(regex(nonEmpty()))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}
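
The third contract, between person-service and contact-service, is built the same way. Here's a hypothetical sketch of how it might look – the exact field set for contact-consumer follows the diagram shown earlier, so treat the body below as illustrative:

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			contact: ([
				email: $(regex(email())),
				phoneNo: $(regex('[0-9]{9}$'))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}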

3. Implementing tests on the producer side

Ok, we have three different contracts assigned to the single endpoint exposed by person-service. We need to publish them in such a way that they are easily available for consumers. In that case Spring Cloud Contract comes with a handy solution. We may define contracts with different responses for the same request, and then choose the appropriate definition on the consumer side. All those contract definitions will be published within the same JAR file. Because we have three consumers, we define three different contracts placed in the directories bank-consumer, contact-consumer and letter-consumer.

contracts-5

All the contracts will use a single base test class. To achieve this we need to provide the fully qualified name of that class to the Spring Cloud Contract Verifier plugin in pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<baseClassForTests>pl.piomin.services.person.BasePersonContractTest</baseClassForTests>
	</configuration>
</plugin>

Here's the full definition of the base class for our contract tests. We mock the repository bean with an answer matching the rules created inside the contract files.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
public abstract class BasePersonContractTest {

	@Autowired
	WebApplicationContext context;
	@MockBean
	PersonRepository repository;
	
	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(this.context);
		PersonBuilder builder = new PersonBuilder()
			.withId(1)
			.withFirstName("Piotr")
			.withLastName("Minkowski")
			.withBirthDate(new Date())
			.withAccountNo("1234567890")
			.withGender(Gender.MALE)
			.withPhoneNo("500070935")
			.withCity("Warsaw")
			.withCountry("Poland")
			.withHouseNo(200)
			.withStreet("Al. Jerozolimskie")
			.withEmail("piotr.minkowski@gmail.com")
			.withPostalCode("02-660");
		when(repository.findById(1)).thenReturn(builder.build());
	}
	
}

The Spring Cloud Contract Maven plugin visible above is responsible for generating stubs from contract definitions. It is executed during the Maven build after running the mvn clean install command. The build is performed on Jenkins CI. The Jenkins pipeline is responsible for checking out the remote Git repository, building binaries from the source code, running automated tests and finally publishing the JAR file containing stubs to a remote artifact repository – Artifactory. Here's the Jenkins pipeline created for the contract producer side (person-service).

node {
  withMaven(maven:'M3') {
    stage('Checkout') {
      git url: 'https://github.com/piomin/sample-spring-cloud-contract-ci.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Publish') {
      def server = Artifactory.server 'artifactory'
      def rtMaven = Artifactory.newMavenBuild()
      rtMaven.tool = 'M3'
      rtMaven.resolver server: server, releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot'
      rtMaven.deployer server: server, releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local'
      rtMaven.deployer.artifactDeploymentPatterns.addInclude("*stubs*")
      def buildInfo = rtMaven.run pom: 'person-service/pom.xml', goals: 'clean install'
      rtMaven.deployer.deployArtifacts buildInfo
      server.publishBuildInfo buildInfo
    }
  }
}

We also need to include the dependency spring-cloud-starter-contract-verifier in the producer app to enable Spring Cloud Contract Verifier.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

4. Implementing tests on the consumer side

To enable Spring Cloud Contract on the consumer side we need to include the artifact spring-cloud-starter-contract-stub-runner in the project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>

Then, the only thing left is to build a JUnit test, which verifies our contract by calling it through an OpenFeign client. The configuration of that test is provided inside the annotation @AutoConfigureStubRunner. We select the latest version of the person-service stubs artifact by setting + in the version section of the ids parameter. Because we have multiple contracts defined inside person-service, we need to choose the right one for the current service by setting the consumerName parameter. All the contract definitions are downloaded from the Artifactory server, so we set the stubsMode parameter to REMOTE. The address of the Artifactory server has to be set using the repositoryRoot property.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {

	@Autowired
	private PersonClient personClient;
	
	@Test
	public void verifyPerson() {
		Person p = personClient.findPersonById(1);
		Assert.assertNotNull(p);
		Assert.assertEquals(1, p.getId().intValue());
		Assert.assertNotNull(p.getFirstName());
		Assert.assertNotNull(p.getLastName());
		Assert.assertNotNull(p.getAddress());
		Assert.assertNotNull(p.getAddress().getCity());
		Assert.assertNotNull(p.getAddress().getCountry());
		Assert.assertNotNull(p.getAddress().getPostalCode());
		Assert.assertNotNull(p.getAddress().getStreet());
		Assert.assertNotEquals(0, p.getAddress().getHouseNo());
	}
	
}

Here's the Feign client implementation responsible for calling the endpoint exposed by person-service.

@FeignClient("person-service")
public interface PersonClient {

	@GetMapping("/persons/{id}")
	Person findPersonById(@PathVariable("id") Integer id);
	
}
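
One thing worth noting: the stub of person-service is started on port 8090 (as set in the ids parameter of @AutoConfigureStubRunner), but no discovery server is running during the test. A simple way to make the Feign client resolve person-service is to point its Ribbon client at localhost in the test configuration – a sketch under the assumption that Ribbon is on the classpath and Eureka should be disabled for tests:

person-service:
  ribbon:
    listOfServers: localhost:8090
eureka:
  client:
    enabled: false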

5. Setup of Continuous Integration process

Ok, we have already defined all the contracts required for our exercise. We have also built a pipeline responsible for building and publishing stubs with contracts on the producer side (person-service). It always publishes the newest version of the stubs generated from the source code. Now, our goal is to launch the pipelines defined for the three consumer applications each time new stubs are published to the Artifactory server by the producer pipeline.
The best solution for that would be to trigger a Jenkins build whenever you deploy an artifact. To achieve this we use a Jenkins plugin called URLTrigger, which can be configured to watch for changes at a certain URL – in that case a REST API endpoint exposed by Artifactory for the selected repository path.
After installing the URLTrigger plugin we have to enable it for all consumer pipelines. You can configure it to watch for changes in the JSON file returned by the Artifactory File List REST API, which is accessed via the following URI: http://192.168.99.100:8081/artifactory/api/storage/[PATH_TO_FOLDER_OR_REPO]/. The file maven-metadata.xml will change every time you deploy a new version of the application to Artifactory. We can monitor the change of the response's content between the last two polls. The last field that has to be filled in is Schedule. If you set it to * * * * * it will poll for a change every minute.

contracts-6

Our three pipelines for the consumer applications are ready. The first run finished with success.

contracts-7

If you have already built the person-service application and published the stubs to Artifactory, you will see the following structure in the libs-snapshot-local repository. I have deployed three different versions of the API exposed by person-service. Each time I publish a new version of the contract, all the dependent pipelines are triggered to verify it.

contracts-8

The JAR file with contracts is published under the classifier stubs.

contracts-9

Spring Cloud Contract Stub Runner tries to find the latest version of contracts.

2018-07-04 11:46:53.273  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is [+] - will try to resolve the latest version
2018-07-04 11:46:54.752  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is [1.3-SNAPSHOT]
2018-07-04 11:46:54.823  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact [pl.piomin.services:person-service:jar:stubs:1.3-SNAPSHOT] to /var/jenkins_home/.m2/repository/pl/piomin/services/person-service/1.3-SNAPSHOT/person-service-1.3-SNAPSHOT-stubs.jar

6. Testing change in contract

Ok, we have already prepared the contracts and configured our CI environment. Now, let's perform a change in the API exposed by person-service. We will just change the name of one field: accountNo to accountNumber.

contracts-12

This change requires a change in the contract definition created on the producer side. If you modify the field name there, person-service will build successfully and a new version of the contract will be published to Artifactory. Because all other pipelines listen for changes in the latest version of the JAR file with stubs, their builds will be started automatically. Microservices letter-service and contact-service do not use the field accountNo, so their pipelines will not fail. Only the bank-service pipeline reports a contract error, as shown in the picture below.

contracts-10

Now, if you were notified about the failed verification of the newest contract version between person-service and bank-service, you can perform the required change on the consumer side.

contracts-11

Building and testing message-driven microservices using Spring Cloud Stream

Spring Boot and Spring Cloud give you a great opportunity to build microservices fast, using different styles of communication. You can create synchronous REST microservices based on Spring Cloud Netflix libraries, as shown in one of my previous articles, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. You can create asynchronous, reactive microservices deployed on Netty with the Spring WebFlux project and combine it successfully with some Spring Cloud libraries, as shown in my article Reactive Microservices with Spring WebFlux and Spring Cloud. And finally, you may implement message-driven microservices based on a publish/subscribe model using Spring Cloud Stream and a message broker like Apache Kafka or RabbitMQ. The last of the listed approaches to building microservices is the main subject of this article. I'm going to show you how to effectively build, scale, run and test messaging microservices based on the RabbitMQ broker.

Architecture

For the purpose of demonstrating Spring Cloud Stream features we will design a sample system which uses the publish/subscribe model for inter-service communication. We have three microservices: order-service, product-service and account-service. Application order-service exposes an HTTP endpoint that is responsible for processing orders sent to our system. All the incoming orders are processed asynchronously – order-service prepares and sends a message to a RabbitMQ exchange and then responds to the calling client that the request has been accepted for processing. Applications account-service and product-service listen for the order messages incoming to the exchange. Microservice account-service is responsible for checking if there are sufficient funds in the customer's account for the order's realization and then withdrawing cash from this account. Microservice product-service checks if there is a sufficient amount of products in the store, and changes the number of available products after processing the order. Both account-service and product-service send an asynchronous response through a RabbitMQ exchange (this time it is one-to-one communication using a direct exchange) with the status of the operation. Microservice order-service, after receiving the response messages, sets the appropriate status of the order and exposes it through the REST endpoint GET /order/{id} to the external client.

If you feel that the description of our sample system is a little incomprehensible, here's the architecture diagram for clarification.

stream-1

Enabling Spring Cloud Stream

The recommended way to include Spring Cloud Stream in the project is with a dependency management system. Spring Cloud Stream has independent release train management in relation to the whole Spring Cloud framework. However, if we have declared spring-cloud-dependencies in the Elmhurst.RELEASE version inside the dependencyManagement section, we wouldn't have to declare anything else in pom.xml. If you prefer to use only the Spring Cloud Stream project, you should define the following section.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-stream-dependencies</artifactId>
      <version>Elmhurst.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

The next step is to add the spring-cloud-stream artifact to the project dependencies. I also recommend including at least the spring-cloud-sleuth library, so that messages are sent with the same traceId as the source request incoming to order-service.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-sleuth</artifactId>
</dependency>

Spring Cloud Stream programming model

To enable connectivity to a message broker for your application, annotate the main class with @EnableBinding. The @EnableBinding annotation takes one or more interfaces as parameters. You may choose between three interfaces provided by Spring Cloud Stream:

  • Sink: This is used for marking a service that receives messages from the inbound channel.
  • Source: This is used for sending messages to the outbound channel.
  • Processor: This can be used in case you need both an inbound channel and an outbound channel, as it extends the Source and Sink interfaces. Because order-service sends messages, as well as receives them, its main class has been annotated with @EnableBinding(Processor.class).

Here’s the main class of order-service that enables Spring Cloud Stream binding.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {
  public static void main(String[] args) {
    new SpringApplicationBuilder(OrderApplication.class).web(true).run(args);
  }
}

Adding message broker

In Spring Cloud Stream nomenclature, the implementation responsible for integration with a specific message broker is called a binder. By default, Spring Cloud Stream provides binder implementations for Kafka and RabbitMQ. It is able to automatically detect and use a binder found on the classpath. Any middleware-specific settings can be overridden through external configuration properties in a form supported by Spring Boot, such as application arguments, environment variables, or just the application.yml file. To include support for RabbitMQ, which is used in this article as the message broker, you should add the following dependency to the project.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

Now, our applications need to connect to one shared instance of the RabbitMQ broker. That's why I run a Docker image with RabbitMQ, exposed outside on the default 5672 port. It also launches a web dashboard available under the address http://192.168.99.100:15672.

$ docker run -d --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:management

We need to override the default address of RabbitMQ for every Spring Boot application by setting the property spring.rabbitmq.host to the Docker machine IP 192.168.99.100.

spring:
  rabbitmq:
    host: 192.168.99.100
    port: 5672

Implementing message-driven microservices

Spring Cloud Stream is built on top of the Spring Integration project. Spring Integration extends the Spring programming model to support the well-known Enterprise Integration Patterns (EIP). EIP defines a number of components that are typically used for orchestration in distributed systems. You have probably heard about patterns such as message channels, routers, aggregators, or endpoints. Let's proceed to the implementation.
We begin with order-service, which is responsible for accepting orders, publishing them on a shared topic and then collecting asynchronous responses from downstream services. Here's the @Service, which builds the message and publishes it to the remote topic using the Source bean.

@Service
public class OrderSender {
  @Autowired
  private Source source;

  public boolean send(Order order) {
    return this.source.output().send(MessageBuilder.withPayload(order).build());
  }
}

That @Service is called by the controller, which exposes HTTP endpoints for submitting new orders and getting an order with its status by id.

@RestController
public class OrderController {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderController.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderRepository repository;
	@Autowired
	OrderSender sender;	

	@PostMapping
	public Order process(@RequestBody Order order) throws JsonProcessingException {
		Order o = repository.add(order);
		LOGGER.info("Order saved: {}", mapper.writeValueAsString(order));
		boolean isSent = sender.send(o);
		LOGGER.info("Order sent: {}", mapper.writeValueAsString(Collections.singletonMap("isSent", isSent)));
		return o;
	}

	@GetMapping("/{id}")
	public Order findById(@PathVariable("id") Long id) {
		return repository.findById(id);
	}

}

Now, let's take a closer look at the consumer side. The message sent by the OrderSender bean from order-service is received by account-service and product-service. To receive the message from the topic exchange, we just have to annotate the method that takes the Order object as a parameter with @StreamListener. We also have to define the target channel for the listener – in that case it is Processor.INPUT.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderApplication.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderService service;

	public static void main(String[] args) {
		new SpringApplicationBuilder(OrderApplication.class).web(true).run(args);
	}

	@StreamListener(Processor.INPUT)
	public void receiveOrder(Order order) throws JsonProcessingException {
		LOGGER.info("Order received: {}", mapper.writeValueAsString(order));
		service.process(order);
	}

}

The received order is then processed by the AccountService bean. The order may be accepted or rejected by account-service depending on whether there are sufficient funds in the customer's account for the order's realization. The response with the acceptance status is sent back to order-service via the output channel invoked by the OrderSender bean.

@Service
public class AccountService {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountService.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	AccountRepository accountRepository;
	@Autowired
	OrderSender orderSender;

	public void process(final Order order) throws JsonProcessingException {
		LOGGER.info("Order processed: {}", mapper.writeValueAsString(order));
		List<Account> accounts = accountRepository.findByCustomer(order.getCustomerId());
		Account account = accounts.get(0);
		LOGGER.info("Account found: {}", mapper.writeValueAsString(account));
		if (order.getPrice() <= account.getBalance()) {
			order.setStatus(OrderStatus.ACCEPTED);
			account.setBalance(account.getBalance() - order.getPrice());
		} else {
			order.setStatus(OrderStatus.REJECTED);
		}
		orderSender.send(order);
		LOGGER.info("Order response sent: {}", mapper.writeValueAsString(order));
	}

}
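
Microservice product-service processes the same incoming order in an analogous way, except that it verifies and decreases the number of available products. A minimal sketch of that logic – the Product class and repository method names here are assumptions, not taken from the source:

@Service
public class ProductService {

	@Autowired
	ProductRepository productRepository;
	@Autowired
	OrderSender orderSender;

	public void process(final Order order) throws JsonProcessingException {
		// assumption: one item of the first ordered product id is reserved per order
		Product product = productRepository.findById(order.getProductIds().get(0));
		if (product.getCount() > 0) {
			order.setStatus(OrderStatus.ACCEPTED);
			product.setCount(product.getCount() - 1);
		} else {
			order.setStatus(OrderStatus.REJECTED);
		}
		// asynchronous response sent back to order-service
		orderSender.send(order);
	}

}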

The last step is the configuration. It is provided inside the application.yml file. We have to properly define destinations for the channels. While order-service assigns the orders-out destination to its output channel, and the orders-in destination to its input channel, account-service and product-service do the opposite. It is logical, because a message sent by order-service via its output destination is received by the consuming services via their input destinations. But it is still the same destination on the shared broker's exchange. Here are the configuration settings of order-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-out
        input:
          destination: orders-in
      rabbit:
        bindings:
          input:
            consumer:
              exchangeType: direct

Here’s configuration provided for account-service and product-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
      rabbit:
        bindings:
          output:
            producer:
              exchangeType: direct
              routingKeyExpression: '"#"'

Finally, you can run our sample microservices. For now, we just need to run a single instance of each microservice. You can easily generate some test requests by running the JUnit test class OrderControllerTest provided in my source code repository inside the module order-service. This case is simple. In the next section we will study a more advanced sample with multiple running instances of the consuming services.
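
Before scaling anything up, you can also send a test order by hand with a plain HTTP request. Assuming order-service listens on port 8090 (the port is an assumption – check your configuration), it might look like this:

$ curl -X POST http://localhost:8090/ -H "Content-Type: application/json" \
    -d '{"customerId":1,"accountId":1,"price":500,"productIds":[2]}'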

Scaling up

To scale up our Spring Cloud Stream applications we just need to launch additional instances of each microservice. They will still listen for incoming messages on the same topic exchange as the currently running instances. After adding one instance of account-service and product-service we may send a test order. The result of that test won't be satisfactory for us… Why? A single order is received by all the running instances of every microservice. This is exactly how topic exchanges work – the message sent to a topic is received by all consumers listening on that topic. Fortunately, Spring Cloud Stream is able to solve that problem by providing a mechanism called a consumer group. It guarantees that only one of the instances handles a given message, as they are placed in a competing consumer relationship. The transformation to the consumer group mechanism when running multiple instances of the service has been visualized in the following figure.

stream-2

Configuration of the consumer group mechanism is not very difficult. We just have to set the group parameter with the name of the group for a given destination. Here's the current binding configuration for account-service. The orders-in destination is a queue created for direct communication with order-service, so only orders-out is grouped using the spring.cloud.stream.bindings.<channelName>.group property.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
          group: account

The consumer group mechanism is a concept taken from Apache Kafka, and implemented in Spring Cloud Stream also for the RabbitMQ broker, which does not natively support it. So, I think it is pretty interesting to see how it is configured on RabbitMQ. If you run two instances of the service without setting a group name on the destination, two bindings are created for a single exchange (one binding per instance), as shown in the picture below. Because two applications are listening on that exchange, there are four bindings assigned to that exchange in total.

stream-3

If you set a group name for the selected destination, Spring Cloud Stream will create a single binding for all running instances of the given service. The name of the binding will be suffixed with the group name.

B08597_11_06

Because we have included spring-cloud-starter-sleuth in the project dependencies, the same traceId header is sent with all the asynchronous requests exchanged during the realization of a single request incoming to the order-service POST endpoint. Thanks to that, we can easily correlate all logs with this header using the Elastic Stack (Kibana).

B08597_11_05

Automated Testing

You can easily test your microservice without connecting to a message broker. To achieve this you need to include spring-cloud-stream-test-support in your project dependencies. It contains the TestSupportBinder bean that lets you interact with the bound channels and inspect any messages sent and received by the application.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>

In the test class we need to declare the MessageCollector bean, which is responsible for receiving messages retained by TestSupportBinder. Here's my test class from account-service. Using the Processor bean I send a test order to the input channel. Then MessageCollector receives the message that is sent back to order-service via the output channel. The test method testAccepted creates an order that should be accepted by account-service, while the testRejected method sets an order price that is too high, which results in rejecting the order.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class OrderReceiverTest {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderReceiverTest.class);

	@Autowired
	private Processor processor;
	@Autowired
	private MessageCollector messageCollector;

	@Test
	@SuppressWarnings("unchecked")
	public void testAccepted() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(500);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.ACCEPTED, received.getPayload().getStatus());
	}

	@Test
	@SuppressWarnings("unchecked")
	public void testRejected() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(100000);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.REJECTED, received.getPayload().getStatus());
	}

}

Conclusion

Message-driven microservices are a good choice whenever you don't need a synchronous response from your API. In this article I have shown a sample use case of the publish/subscribe model in inter-service communication between microservices. The source code is, as usual, available on GitHub (https://github.com/piomin/sample-message-driven-microservices.git). For more interesting examples with the usage of the Spring Cloud Stream library, also with Apache Kafka, you can refer to Chapter 11 of my book Mastering Spring Cloud (https://www.packtpub.com/application-development/mastering-spring-cloud).

Managing Spring Boot apps locally with Trampoline

Today I came across an interesting solution for managing Spring Boot applications locally – Trampoline. It is a rather simple product that provides a web console allowing you to start, stop and monitor your applications. However, it can sometimes be useful, especially if you run many different applications locally during microservices development. In this article I'm going to show the main features provided by Trampoline.

How it works

Trampoline is itself a Spring Boot application, so you can easily start it using your IDE or with the java -jar command after building the project with mvn clean install. By default the web console is available on port 8080, but you can easily override it with the server.port parameter, as shown in the example after the list below. It allows you to:

  • Start your application – this is realized by running the Maven Spring Boot plugin command mvn spring-boot:run, which builds the binary from the source code and runs the Java application
  • Shutdown your application – this is realized by calling the Spring Boot Actuator /shutdown endpoint, which performs a graceful shutdown of your application
  • Monitor your application – it displays some basic information retrieved from Spring Boot Actuator endpoints, like trace, logs, metrics and Git commit data
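
For example, assuming the build produced target/trampoline.jar (the JAR name is an assumption and depends on the project version), you could start the console on a different port like this:

$ java -jar target/trampoline.jar --server.port=9090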

Setup

First, you need to clone the Trampoline repository from GitHub. It is available here: https://github.com/ErnestOrt/Trampoline.git. The application is available inside the trampoline directory. You can run its main class Application or just run the Maven command mvn spring-boot:run. And that's all. Trampoline is available under the address http://localhost:8080.

Configuring applications

We will use one of my previous samples of microservices built with Spring Boot 2.0. It is available on my GitHub account in the repository sample-spring-microservices-new, available here: https://github.com/piomin/sample-spring-microservices-new.git. Before deploying these microservices on Trampoline we need to perform some minor changes. First, all the microservices have to expose Spring Boot Actuator endpoints. Be sure that the endpoint /shutdown is enabled. All changes should be performed in the Spring Boot YAML configuration files, which are stored on config-service.

management:
  endpoint.shutdown.enabled: true
  endpoints.web.exposure.include: '*'

If you would like to provide information about the last commit you should include the Maven plugin git-commit-id-plugin, which is executed during the application build. Of course, you also need to add the spring-boot-maven-plugin, which is used for building and running a Spring Boot application from Maven. All the required changes are available in the branch trampoline (https://github.com/piomin/sample-spring-microservices-new/tree/trampoline).

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
		<plugin>
			<groupId>pl.project13.maven</groupId>
			<artifactId>git-commit-id-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Adding microservices

Further configuration is provided using the Trampoline web console. First, go to the SETTINGS section. You need to register every single instance of your microservices. You can register:

  • An external, already running application, by providing its IP address and HTTP port
  • A Git repository with your microservice, which will then be cloned onto your machine
  • A Git repository with your microservice already existing on the local machine, just by providing its location

I have cloned the repository with the microservices myself, so I'm selecting the third choice. Inside the Register Microservice form we have to set the microservice name, port, actuator endpoint context path, default build tool and Maven pom.xml file location.

trampoline-1

It is important to remember to set the Maven home location in the Maven Settings panel. After registering all the sample microservices (config-service, discovery-service, gateway-service, and three Spring Cloud applications) we may add them to one group. It is a very useful feature, because we can then deploy them all with one click.

trampoline-2

Here’s the full list of services registered in Trampoline.

trampoline-3

Managing microservices

Now, we can navigate to the INSTANCES section. We can launch a single instance of a microservice or a group of microservices. If you would like to launch a single instance, just select it from the list on the Launch Instance panel and click the Launch button. It immediately starts a new command window, builds your application from the source code and launches it under the selected port.

trampoline-4

The list of running microservices is available below. You can see each application's HTTP port and status there. You may also display trace, logs or metrics by clicking one of the icons available in every row.

trampoline-5

Here's the information about the last commit for discovery-service.

trampoline-6

If you decide to restart an application, Trampoline sends a request to the /shutdown endpoint, rebuilds your application with the newest version of the code and runs it again. Alternatively, you may use Spring Boot Devtools (by including the dependency org.springframework.boot:spring-boot-devtools), which forces your application to be restarted after source code modification. Because Trampoline continuously monitors the status of all registered applications by calling their actuator endpoints, you will still see the full list of running microservices.