Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms may seem to be quite a challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not that complicated. The Kubernetes extension, which is a part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all the Kubernetes resources required to set up your environment, and once everything is ready it runs the defined integration tests.
One very good piece of news related to Arquillian Cube is that it supports the Istio framework: you can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. It allows you to test some unexpected situations during network communication between microservices, like server errors or timeouts.
If you would like to run tests using Istio resources on Minishift, you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let's do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let's do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let's create a new project dedicated to Istio and then add some required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed to the installation of the Istio components. I downloaded the newest available version of Istio – 1.0.1. The installation file is available under the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml
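
Once the installation has finished, you can verify that all Istio components started successfully, for example by listing the pods in the istio-system namespace:

$ oc get pods -n istio-system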

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests with OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now we can use the @IstioResource annotation to apply Istio resources onto the OpenShift cluster, or the IstioAssistant bean to get some additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let's take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard test created for running on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I have invoked the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion that the list of accounts is empty: Assert.assertTrue(c.getAccounts().isEmpty()). Why? We will simulate a timeout in communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", url);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into routing rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule's corresponding request destination. The faults can be either delays or aborts, and for both types we can define the percentage of affected requests using the percent field. In the following Istio resource I have defined a 2-second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1
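
For comparison, the other fault type, abort, immediately returns an error code for a configured share of requests instead of delaying them. A hypothetical variant of the rule above (not used in the tests here), returning HTTP 500 for half of the calls to account-service, could look like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      abort:
        httpStatus: 500
        percent: 50
    route:
    - destination:
        host: account-service
        subset: v1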

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the tests we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar resources into the pod definitions using the istioctl kube-inject command as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here's the fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any application that connects to it. To achieve that we should point to the Mongo deployment resource in the Arquillian configuration file using the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We will use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the admin user. You can obtain it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

The communication between customer-service and account-service is realized by the Vert.x WebClient. We will set the read timeout for the client to 1 second. Because Istio injects a 2-second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List<Account>>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List<Account> accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList<>()));
			}
		});
		return this;
	}
}

The full code of the sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or just from your IDE. The test1CustomerRoute test is executed first. It adds a new customer and saves the generated id for the two subsequent tests.
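
For example, running the full Maven build will also execute them (assuming the Arquillian configuration described above is in place):

$ mvn clean install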


The next test is test2AccountRoute. It adds an account for the customer created during the previous test.


Finally, the test responsible for verifying communication between the microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in communication with account-service.



Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker

Here's the next article in the series of "Quick Guide to…". This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article is quite similar to Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I'm going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development
  • Providing service discovery for all microservices using Spring Cloud Kubernetes project
  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files
  • Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components provided by Spring Cloud, like Eureka, Spring Cloud Config or Zuul, may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

One really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still at the incubation stage, it is definitely worth dedicating some time to. It integrates Spring Cloud with Kubernetes. I'll show you how to use the implementation of the discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let's take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through REST APIs. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

[Diagram: micro-kube-1 – architecture of the sample system]

Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released as part of the Spring Cloud release train, so we need to define its version explicitly. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.
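
Since SNAPSHOT artifacts are not published to Maven Central, you may also need to declare a snapshot repository in your pom.xml. Here's a minimal sketch, assuming the Spring snapshot repository hosts these builds:

<repositories>
	<repository>
		<id>spring-snapshots</id>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>
</repositories>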

The source code of the sample applications presented in this article is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.

Pre-requirements

In order to be able to deploy and test our sample microservices, we need to prepare a development environment. This can be done in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. Detailed instructions for Minishift may be found in Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word 'minishift' with 'minikube'. In fact, it does not matter whether you choose Kubernetes or OpenShift – the next part of this tutorial applies to both of them
  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes, you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift, you should first enable the admin-user addon, then log in as a cluster admin and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
  • All our sample microservices use MongoDB as a backend store, so you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use a predefined template just by selecting the Mongo service from the Catalog list. With Kubernetes the task is more difficult: you have to prepare the deployment configuration files by yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster, execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that, the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject configuration with Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a Config Map instead. It holds key-value pairs of configuration data that can be consumed in pods, and it is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your clusters, you must use Secrets. The usage of both these Kubernetes objects can be perfectly demonstrated with the example of MongoDB connection settings. Inside a Spring Boot application we can easily inject them using environment variables. Here's a fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While the username and password are sensitive fields, the database name is not, so we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, the username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
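
Note that Kubernetes expects the values in a Secret's data section to be base64-encoded. You can produce them with the base64 tool; for example, the database-password above was generated like this:

$ echo -n '123456' | base64
MTIzNDU2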

To apply the configuration to the Kubernetes cluster, we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that we should inject the configuration properties into the application's pods. When defining the container configuration inside the Deployment YAML file, we have to include references to the config map and secret entries as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building service discovery with Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster, and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee
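
As mentioned above, scaling the microservice up is just a matter of increasing the number of running pods; for example, with plain kubectl you could run:

$ kubectl scale deployment employee --replicas=3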

A Service can be used for accessing an application from outside the Kubernetes cluster or for inter-service communication inside a cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First we need to include the following dependency in the project's pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application – the same as we have always done for discovery with Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. The discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced, and the Zipkin servers available to send traces or spans to.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
	// ...
}

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name of the application. For the application employee-service it is employee.

spring:
  application:
    name: employee

3. Building microservice using Docker and deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB, we should create an interface that extends the standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
	
	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);
	
}

The entity class should be annotated with Mongo's @Document, and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {

	@Id
	private String id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	
	// ...
	
}

The repository bean has been injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

In order to run our microservices on Kubernetes, we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let’s build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with the applications on Kubernetes. To do that, just execute kubectl apply on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in Step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml

4. Communication between microservices with Spring Cloud Kubernetes Ribbon

All the microservices are deployed on Kubernetes. Now it's worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let's take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you could also use a Spring @LoadBalanced RestTemplate.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka: OpenFeign uses Ribbon for client-side load balancing, while Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
	
	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
	// ...
	
}

Here's the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
	
}

Finally, we have to inject the Feign client bean into the REST controller. Now we may call the methods defined inside EmployeeClient, which is equivalent to calling the REST endpoints.

@RestController
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;
	
	// ...
	
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganizationId(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

5. Building API gateway using Kubernetes Ingress

An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it, we should first prepare a YAML descriptor file. The descriptor should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

For testing this solution locally, we have to insert the mapping between the IP address and the hostname set in the ingress definition into the hosts file, as shown below. After that we can access the services through the ingress using the defined hostname, for example: http://microservices.info/employee.

192.168.99.100 microservices.info
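
For example, assuming the services are deployed and the hosts entry above is in place, you can call the employee API through the ingress like this:

$ curl http://microservices.info/employee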

You can check the details of the created ingress by executing the command kubectl describe ing gateway-ingress.

6. Enabling API specification on gateway using Swagger2

OK, so what if we would like to expose a single Swagger documentation for all the microservices deployed on Kubernetes? Well, here things get complicated… We could run a container with Swagger UI and manually map all the paths exposed by the ingress, but that is rather not a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here’s the list of dependencies used in my gateway-service project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster. We would like to display documentation only for our three microservices, which is why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the route addresses from Kubernetes discovery and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(location);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}

The application gateway-service should be deployed on the cluster the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is available at http://192.168.99.100:31237/swagger-ui.html.

Conclusion

I'm actually rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes' popularity as a platform has been growing rapidly over the last months, but it still has some weaknesses. One of them is inter-service communication. Kubernetes doesn't give us many mechanisms out-of-the-box that allow configuring more advanced rules. This is a reason for creating service mesh frameworks for Kubernetes like Istio or Linkerd. While these projects are still relatively new solutions, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.

Local Continuous Delivery Environment with Docker and Jenkins

In this article I'm going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on the local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), a Jenkins master, a Jenkins JNLP slave with Docker, and a private Docker registry. All those tools will be run locally using their Docker images. Thanks to that you will be able to easily test it on your laptop, and then configure the same environment in production, deployed on multiple servers or VMs. Let's take a look at the architecture of the proposed solution.

[Diagram: art-docker-1 – architecture of the proposed solution]

1. Running Jenkins Master

We use the latest Jenkins LTS image. The Jenkins web dashboard is exposed on port 38080. Slave agents may connect to the master on the default JNLP (Java Web Start) port 50000.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After starting the container, you have to execute the command docker logs jenkins in order to obtain the initial admin password. Find the generated password in the logs, copy it and paste it on the Jenkins start page available at http://192.168.99.100:38080.
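
Alternatively, you can read the password directly from the container; a minimal one-liner, assuming the default JENKINS_HOME location inside the official image:

$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword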


We have to install several Jenkins plugins to be able to check out the project from a Git repository, build the application from source code using Maven, and finally build and push a Docker image to the private registry. Here's the list of required plugins:

  • Git Plugin – this plugin allows you to use Git as a build SCM
  • Maven Integration Plugin – this plugin provides advanced integration for Maven 2/3
  • Pipeline Plugin – this is a suite of plugins that allows you to create continuous delivery pipelines as code and run them in Jenkins
  • Docker Pipeline Plugin – this plugin allows you to build and use Docker containers from pipelines

2. Building Jenkins Slave

Pipelines are usually run on a different machine than the one with the master node. Moreover, we need to have the Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready Docker images with Docker-in-Docker and a Jenkins client agent, I have never found an image with JDK, Maven, Git and Docker installed together. These are the most commonly used tools when building images for your microservices, so it is definitely worth preparing such an image for Jenkins slaves.

Here's the Dockerfile for a Jenkins Docker-in-Docker slave with Git, Maven and OpenJDK installed. I used Docker-in-Docker as the base image (1). We can override some properties when running the container. You will probably have to override the default Jenkins master address (2) and the slave secret key (3). The rest of the parameters are optional, and you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with special sudo rights for running Docker (5). Finally, we run the entrypoint.sh script, which starts the Docker daemon and the Jenkins agent (6).

# (1)
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2)
ENV JENKINS_MASTER http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3)
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo

# (4)
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA}  /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar

COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
# (6)
ENTRYPOINT ["/usr/local/bin/entrypoint"]

Here’s the script entrypoint.sh.

#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
	-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
	-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository https://github.com/piomin/jenkins-slave-dind-jnlp.git, build the image and then start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it is just an optional step, because the image is already available on my Docker Hub account.


3. Enabling Docker-in-Docker Slave

To add a new slave node, navigate to Manage Jenkins -> Manage Nodes -> New Node. Then define a permanent node with the name parameter filled in. The most suitable name is the default name declared inside the Docker image definition – dind-node. You also have to set the remote root directory, which should be equal to the path defined inside the container for the JENKINS_HOME environment variable. In my case it is /home/jenkins. The slave node should be launched via Java Web Start (JNLP).


The new node is visible on the list of nodes as disabled. You should click it in order to obtain its secret key.


Finally, you may run your slave container using the following command, containing the secret copied from the node's panel in the Jenkins web dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan, you should see the enabled node dind-node on the list of nodes.


4. Setting up Docker Private Registry

After deploying the Jenkins master and slave, the last required element of the architecture has to be launched – the private Docker registry. Because we will access it remotely (from the Docker-in-Docker container), we have to configure a secure TLS/SSL connection. To achieve that, we should first generate a TLS certificate and key. We can use the openssl tool for this. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 1024

Then, we should generate a certificate signing request (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate that is valid for 1 year using openssl command as shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don't forget to remove the passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456

You should copy the generated .key and .crt files to your Docker machine. After that you may run the Docker registry using the following command.

docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2
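
Because the certificate is self-signed, any Docker daemon that pushes to or pulls from this registry has to trust it. One way to do that (assuming the standard Docker certs.d layout) is to copy the certificate into a directory named after the registry address:

$ sudo mkdir -p /etc/docker/certs.d/192.168.99.100:5000
$ sudo cp registry.crt /etc/docker/certs.d/192.168.99.100:5000/ca.crt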

If the registry has been successfully started, you should be able to access it over HTTPS by calling the address https://192.168.99.100:5000/v2/_catalog from your web browser.

5. Creating application Dockerfile

The source code of the sample applications is available on GitHub in the repository sample-spring-microservices-new (https://github.com/piomin/sample-spring-microservices-new.git). There are several modules with microservices. Each of them has a Dockerfile created in its root directory. Here's a typical Dockerfile for our microservices built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building pipeline through Jenkinsfile

This step is the most important phase of our exercise. We will prepare a pipeline definition, which combines all the tools and solutions discussed so far. The pipeline definition is a part of every sample application's source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.
Every pipeline is divided into stages. Every stage defines a subset of tasks performed within the pipeline. We can select the node responsible for executing the pipeline's steps, or leave it empty to allow random selection of the node. Because we have already prepared a dedicated node with Docker, we force the pipeline to be built on that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using a Maven command (2). Once the fat JAR file has been prepared, we may proceed to building the application's Docker image (3). We use methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR to the secure private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our Docker registry. Here's the full code of the Jenkinsfile prepared for the module config-service.

node('dind-node') {
    stage('Checkout') { // (1)
      git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Build') { // (2)
      dir('config-service') {
        sh 'mvn clean install'
        def pom = readMavenPom file:'pom.xml'
        print pom.version
        env.version = pom.version
        currentBuild.description = "Release: ${env.version}"
      }
    }
    stage('Image') {
      dir ('config-service') {
        docker.withRegistry('https://192.168.99.100:5000') {
          def app = docker.build "piomin/config-service:${env.version}" // (3)
          app.push() // (4)
        }
      }
    }
}

7. Creating Pipeline in Jenkins Web Dashboard

After preparing the application's source code, the Dockerfile and the Jenkinsfile, the only thing left is to create a pipeline using the Jenkins UI. We need to select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then, go to the Configure panel and select Pipeline script from SCM in the Pipeline section. Inside that form we should fill in the address of the Git repository, the user credentials and the location of the Jenkinsfile.


8. Configure GitLab WebHook (Optionally)

If you run GitLab locally using its Docker image, you will be able to configure a webhook which triggers a run of your pipeline after pushing changes to the Git repository. To run GitLab using Docker, execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab dashboard, we need to enable this feature for the Jenkins pipeline. To achieve that, we should first install the GitLab Plugin.


Then, you should come back to the pipeline's configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline, called config-service-pipeline, under the URL http://192.168.99.100:38080/project/config-service-pipeline.


Before proceeding to the configuration of the webhook in the GitLab dashboard, you should retrieve your Jenkins user API token. To achieve that, go to the profile panel, select Configure and click the button Show API Token.


To add a new webhook for your Git repository, go to the section Settings -> Integrations and fill the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.


9. Running pipeline

Now, we may finally run our pipeline. If you use the GitLab Docker container as your Git repository platform, you just have to push changes in the source code. Otherwise you have to start the pipeline build manually. The first build will take a few minutes, because Maven has to download the dependencies required for building the application. If everything ends with success, you should see a successful build on your pipeline dashboard.


You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.


Testing microservices on OpenShift using Arquillian Cube

I came across the Arquillian framework for the first time when I was building automated end-to-end tests for Java EE based applications. At that time, testing applications deployed on Java EE servers was not very comfortable. Arquillian came with a nice solution to that problem: it provides useful mechanisms for testing EJBs deployed on an embedded application server.
Currently, Arquillian provides multiple modules dedicated to different technologies and use cases. One of these modules is Arquillian Cube. With this extension you can create integration/functional tests running on Docker containers or even on more advanced orchestration platforms like Kubernetes or OpenShift.
In this article I'm going to show you how to use Arquillian Cube for building integration tests for applications running on the OpenShift platform. All the examples will be deployed locally on Minishift. Here's the full list of topics covered in this article:

  • Using Arquillian Cube for deploying and running applications on Minishift
  • Testing applications deployed on Minishift by calling their REST API exposed via OpenShift routes
  • Testing inter-service communication between deployed applications based on Kubernetes services

Before reading this article, it is also worth reading two of my previous articles about Kubernetes and OpenShift.

The following picture illustrates the architecture of the solution discussed here. We will build and deploy two sample applications on Minishift. They integrate with a NoSQL database, which is also run as a service on the OpenShift platform.

arquillian-1

Now, we may proceed to the development.

1. Including Arquillian Cube dependencies

Before including dependencies on the Arquillian Cube libraries we should define a dependency management section in our pom.xml. It should contain the BOMs of the Arquillian framework and of its Cube extension.

<dependencyManagement>
     <dependencies>
          <dependency>
                <groupId>org.arquillian.cube</groupId>
                <artifactId>arquillian-cube-bom</artifactId>
                <version>1.15.3</version>
                <scope>import</scope>
                <type>pom</type>
          </dependency>
          <dependency>
                <groupId>org.jboss.arquillian</groupId>
                <artifactId>arquillian-bom</artifactId>
                <version>1.4.0.Final</version>
                <scope>import</scope>
                <type>pom</type>
          </dependency>
     </dependencies>
</dependencyManagement>

Here’s the list of libraries used in my sample project. The most important thing is to include the starter for the Arquillian Cube OpenShift extension, which contains all required dependencies. It is also worth including the arquillian-cube-requirement artifact if you would like to annotate the test class with @RunWith(ArquillianConditionalRunner.class), and openshift-client in case you would like to use the Fabric8 OpenShiftClient.

<dependency>
     <groupId>org.jboss.arquillian.junit</groupId>
     <artifactId>arquillian-junit-container</artifactId>
     <version>1.4.0.Final</version>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>org.arquillian.cube</groupId>
     <artifactId>arquillian-cube-requirement</artifactId>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>org.arquillian.cube</groupId>
     <artifactId>arquillian-cube-openshift-starter</artifactId>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>io.fabric8</groupId>
     <artifactId>openshift-client</artifactId>
     <version>3.1.12</version>
     <scope>test</scope>
</dependency>

2. Running Minishift

I gave detailed instructions on how to run Minishift locally in my previous articles about OpenShift. Here’s the full list of commands that should be executed in order to start Minishift, reuse the Docker daemon managed by Minishift and create a test namespace (project).

$ minishift start --vm-driver=virtualbox --memory=2G
$ minishift docker-env
$ minishift oc-env
$ oc login -u developer -p developer
$ oc new-project sample-deployment

We also have to create the MongoDB database service on OpenShift. The OpenShift platform provides an easy way of deploying built-in services via the web console available at https://192.168.99.100:8443. You can select the required service on the main dashboard, and just confirm the installation using default properties. Otherwise, you would have to provide a YAML template with the deployment configuration, and apply it to Minishift using the oc command. A YAML file will also be required if you decide to recreate the namespace on every single test case (explained in Step 3). I won’t paste the content of the template for creating the MongoDB service on Minishift here. This file is available in my GitHub repository as /openshift/mongo-deployment.yaml. To access it, clone the repository sample-vertx-kubernetes and switch to the branch openshift (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-tests). It contains definitions of a secret, a persistentVolumeClaim, a deploymentConfig and a service.

arquillian-2

3. Configuring connection with Minishift for Arquillian

All the Arquillian configuration settings should be provided in the arquillian.xml file located in the src/test/resources directory. When running Arquillian tests on Minishift you generally have two approaches that may be applied. You can create a new namespace per test suite and remove it after the test, or just use an existing one and then remove all the components created within the selected namespace. The first approach is the default for every test, until you modify it inside the Arquillian configuration file using the namespace.use.existing and namespace.use.current properties.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">sample-deployment</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU</property>
</extension>

You also have to set the Kubernetes master address and the API token. In order to obtain the token, just run the following command.

$ oc whoami -t
EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU

4. Building Arquillian JUnit test

Every JUnit test class should be annotated with @RequiresOpenshift. It should also have the runner set; in this case it is ArquillianConditionalRunner. The test method testCustomerRoute applies the configuration passed inside the file deployment.yaml, which is assigned to the method using the @Template annotation.
The important part of this test is the route URL declaration. We have to annotate it with the following annotations:

  • @RouteURL – searches for a route with the name defined using the value parameter and injects it into a URL object instance
  • @AwaitRoute – if you do not declare this annotation, the test will finish right after starting, because deployment on OpenShift is processed asynchronously. @AwaitRoute forces the test to wait until the route is available on Minishift. We can set the timeout of waiting for the route (in this case 2 minutes) and the route’s path. The route’s path is especially important here; without it our test won’t locate the route and will fail after the 2-minute timeout.

The test method is very simple. In fact, I only send a POST request with a JSON object to the endpoint assigned to the customer-route route and verify that the HTTP status code is 200. Because I had a problem with injecting the route’s URL (it doesn’t work for my sample with Minishift v3.9.0, while it works with Minishift v3.7.1), I needed to prepare it manually in the code. If it worked properly, we could use the URL url instance for that.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
public class CustomerServiceApiTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerServiceApiTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;
    @ArquillianResource
    OpenShiftClient client;

    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    @Test
    @Template(url = "classpath:deployment.yaml")
    public void testCustomerRoute() {
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url("http://customer-route-sample-deployment.192.168.99.100.nip.io/customer").post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

5. Preparing deployment configuration

Before running the test we have to prepare the template with the configuration, which is loaded by Arquillian Cube using the @Template annotation. We need to create a deploymentConfig, inject the MongoDB credentials stored in a secret object, and finally expose the service outside the container using a route object.

kind: Template
apiVersion: v1
metadata:
  name: customer-template
objects:
  - kind: ImageStream
    apiVersion: v1
    metadata:
      name: customer-image
    spec:
      dockerImageRepository: piomin/customer-vertx-service
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      template:
        metadata:
          labels:
            name: customer-service
        spec:
          containers:
          - name: customer-vertx-service
            image: piomin/customer-vertx-service
            ports:
            - containerPort: 8090
              protocol: TCP
            env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
      replicas: 1
      triggers:
      - type: ConfigChange
      - type: ImageChange
        imageChangeParams:
          automatic: true
          containerNames:
          - customer-vertx-service
          from:
            kind: ImageStreamTag
            name: customer-image:latest
      strategy:
        type: Rolling
      paused: false
      revisionHistoryLimit: 2
      minReadySeconds: 0
  - kind: Service
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      ports:
      - name: "web"
        port: 8090
        targetPort: 8090
      selector:
        name: customer-service
  - kind: Route
    apiVersion: v1
    metadata:
      name: customer-route
    spec:
      path: "/customer"
      to:
        kind: Service
        name: customer-service

6. Testing inter-service communication

In the sample project the communication with other microservices is realized with the Vert.x WebClient. It takes the Kubernetes service name and its container port as parameters. It is implemented inside customer-service by AccountClient, which is then invoked inside the Vert.x HTTP route implementation. Here’s the AccountClient implementation.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).send(res2 -> {
			LOGGER.info("Response: {}", res2.result().bodyAsString());
			List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
			resultHandler.handle(Future.succeededFuture(accounts));
		});
		return this;
	}
	
}

The endpoint GET /account/customer/:customerId exposed by account-service is called within the implementation of the method GET /customer/:id exposed by customer-service. This time we create a new namespace instead of using an existing one. That’s why we have to apply the MongoDB deployment configuration before applying the configuration of the sample services. We also need to upload the configuration of account-service, which is provided inside the account-deployment.yaml file. The rest of the JUnit test is pretty similar to the test described in Step 4. It waits until customer-route is available on Minishift. The only differences are the called URL and the dynamic injection of the namespace into the route’s URL.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
@Templates(templates = {
        @Template(url = "classpath:mongo-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml"),
        @Template(url = "classpath:account-deployment.yaml")
})
public class CustomerCommunicationTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerCommunicationTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;

    String id;
    
    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    // ...

    @Test
    public void testGetCustomerWithAccounts() {
        LOGGER.info("Route URL: {}", url);
        String projectName = assistant.getCurrentProjectName();
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url("http://customer-route-" + projectName + ".192.168.99.100.nip.io/customer/" + id).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

You can run the test using your IDE or just by executing the command mvn clean install.

Conclusion

Arquillian Cube comes with a neat solution for integration testing on Kubernetes and OpenShift platforms. It is not difficult to prepare and upload the configuration with a database and microservices and then deploy it on an OpenShift node. You can even test communication between microservices just by deploying a dependent application with an OpenShift template.

Quick guide to deploying Java apps on OpenShift

In this article I’m going to show you how to deploy your applications on OpenShift (Minishift), connect them with other services exposed there, and use some other interesting deployment features provided by OpenShift. OpenShift is built on top of Docker containers and the Kubernetes container cluster orchestrator. Currently, it is the most popular enterprise platform based on those two technologies, so it is definitely worth examining in more detail.

1. Running Minishift

We use Minishift to run a single-node OpenShift cluster on the local machine. The only prerequisite for installing Minishift is a virtualization tool. I use Oracle VirtualBox as a hypervisor, so I should set the --vm-driver parameter to virtualbox in my start command.

$ minishift start --vm-driver=virtualbox --memory=3G

2. Running Docker

It turns out that you can easily reuse the Docker daemon managed by Minishift in order to run Docker commands directly from your command line, without any additional installations. To achieve this, just run the following command after starting Minishift.

@FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i

3. Running OpenShift CLI

The last tool required before starting any practical exercise with Minishift is the CLI, which is available under the oc command. To enable it on your command line, run the following commands.

$ minishift oc-env
$ SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.9.0\windows;%PATH%
$ REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

Alternatively you can use the OpenShift web console, which is available on port 8443. On my Windows machine it is by default launched at the address 192.168.99.100.

4. Building Docker images of the sample applications

I prepared two sample applications that are used for presenting the OpenShift deployment process. These are simple Java applications built with Vert.x that provide an HTTP API and store data in MongoDB. However, the technology is not very important here. We need to build Docker images with these applications. The source code is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git) in the branch openshift (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift). Here’s the sample Dockerfile for account-vertx-service.

FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE account-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
ENV DATABASE_USER mongo
ENV DATABASE_PASSWORD mongo
ENV DATABASE_NAME db
EXPOSE 8095
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]

Go to the account-vertx-service directory and run the following command to build an image from the Dockerfile visible above.

$ docker build -t piomin/account-vertx-service .

The same step should be performed for customer-vertx-service. After that you have two images built, both tagged latest, which can now be deployed and run on Minishift.

5. Preparing OpenShift deployment descriptor

When working with OpenShift, the first step of an application’s deployment is to create a YAML configuration file. This file contains basic information about the deployment, like the containers used for running the application (1), scaling (2), triggers that drive automated deployments in response to events (3) and the strategy of deploying your pods on the platform (4).
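
The YAML file itself isn’t reproduced here; below is a minimal sketch of what such a descriptor may look like for account-service, with the four numbered elements marked in comments. The names and the port follow the Dockerfile and the oc commands used later in this article, but treat the exact values as assumptions rather than the original file.

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: account-service
spec:
  template:
    metadata:
      labels:
        name: account-service
    spec:
      containers: # (1) containers used for running the application
      - name: account-vertx-service
        image: piomin/account-vertx-service
        ports:
        - containerPort: 8095
          protocol: TCP
  replicas: 1 # (2) scaling
  triggers: # (3) triggers that drive automated deployments
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - account-vertx-service
      from:
        kind: ImageStreamTag
        name: account-vertx-service:latest
  strategy: # (4) deployment strategy
    type: Rolling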

Deployment configurations can be managed with the oc command like any other resource. You can create a new configuration or update an existing one by using the oc apply command.

$ oc apply -f account-deployment.yaml

You may be a little surprised, but this command does not trigger any build and does not start the pods. In fact, you have only created a resource of type deploymentConfig, which merely describes the deployment process. You can start this process using some other oc commands, but first let’s take a closer look at the resources required by our application.

6. Injecting environment variables

As I have mentioned before, our sample applications use an external datasource. They need to open a connection to the existing MongoDB instance in order to store the data passed to the HTTP endpoints exposed by the application. Here’s the MongoVerticle class, which is responsible for establishing the client connection with MongoDB. It uses environment variables for setting the security credentials and the database name.

public class MongoVerticle extends AbstractVerticle {

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final AccountRepository service = new AccountRepositoryImpl(client);
			ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
		});
	}

}

MongoDB is available in OpenShift’s catalog of predefined Docker images. You can easily deploy it on your Minishift instance just by clicking the “MongoDB” icon in the “Catalog” tab. Username and password will be automatically generated if you do not provide them during deployment setup. All the properties are available as the deployment’s environment variables and are stored as secrets/mongodb, where mongodb is the name of the deployment.
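
If you want to inspect the generated credentials, you may print the secret (assuming the default mongodb deployment name):

$ oc get secret mongodb -o yaml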

openshift-1

Environment variables can be easily injected into any other deployment using the oc set command; they are then injected into the pod during the deployment process. The following command injects all secrets assigned to the mongodb deployment into the configuration of our sample application’s deployment.

$ oc set env --from=secrets/mongodb dc/account-service

7. Importing Docker images to OpenShift

The deployment configuration is ready, so in theory we could start the deployment process. However, let’s go back for a moment to the deployment config defined in Step 5. We defined two triggers there that cause a new replication controller to be created, which results in deploying a new version of the pod. The first of them is a configuration change trigger that fires whenever changes are detected in the pod template of the deployment configuration (ConfigChange). The second of them, the image change trigger (ImageChange), fires when a new version of the Docker image is pushed to the repository. To be able to watch whether an image in the repository has been changed, we have to define and create an image stream. Such an image stream does not contain any image data, but presents a single virtual view of related images, something similar to an image repository. Inside the deployment config file we referred to the image stream account-vertx-service, so the same name should be provided inside the image stream definition. In turn, when setting the spec.dockerImageRepository field, we define the Docker pull specification for the image.
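
The content of account-image.yaml isn’t shown in the original; based on the description above, a minimal sketch of such an image stream definition may look as follows.

kind: ImageStream
apiVersion: v1
metadata:
  name: account-vertx-service
spec:
  dockerImageRepository: piomin/account-vertx-service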

Finally, we can create the resource on the OpenShift platform.

$ oc apply -f account-image.yaml

8. Running deployment

Once the deployment configuration has been prepared, and the Docker images have been successfully imported into the repository managed by the OpenShift instance, we may trigger the deployment using the following oc commands.

$ oc rollout latest dc/account-service
$ oc rollout latest dc/customer-service

If everything goes fine the new pods should be started for the defined deployments. You can easily check it out using OpenShift web console.

9. Updating image stream

We have already created two image streams related to the Docker repositories. Here’s the screen from OpenShift web console that shows the list of available image streams.

openshift-images

To be able to push a new version of an image to the OpenShift internal Docker registry we should first perform docker login against this registry using the user’s authentication token. To obtain the token from OpenShift use the oc whoami -t command, and then pass it to your docker login command with the -p parameter.

$ oc whoami -t
Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
$ docker login -u developer -p Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM https://172.30.1.1:5000

Now, if you make any change in your application and rebuild your Docker image with the latest tag, you have to push that image to the image stream on OpenShift. The address of the internal registry has been automatically generated by OpenShift, and you can check it out in the image stream’s details. For me, it is 172.30.1.1:5000.

$ docker tag piomin/account-vertx-service 172.30.1.1:5000/sample-deployment/account-vertx-service:latest
$ docker push 172.30.1.1:5000/sample-deployment/account-vertx-service

After pushing a new version of the Docker image to the image stream, a rollout of the application is started automatically. Here’s the screen from the OpenShift web console that shows the deployment history of the account-service application.

openshift-2

Conclusion

I have shown you the steps of deploying your application on the OpenShift platform. Based on a sample Java application that connects to a database, I illustrated how to inject credentials into that application’s pod in a way that is entirely transparent to the developer. I also performed an update of the application’s Docker image, in order to show how to trigger a new deployment on an image change.

openshift-3

Microservices traffic management using Istio on Kubernetes

I have already described a simple example of route configuration between two microservices deployed on Kubernetes in one of my previous articles: Service Mesh with Istio on Kubernetes in 5 steps. You can refer to that article if you are interested in basic information about Istio and its deployment on Kubernetes via Minikube. Today we will create some more advanced traffic management rules based on the same sample applications as used in the previous article about Istio.

The source code of the sample applications is available on GitHub in the repository sample-istio-services (https://github.com/piomin/sample-istio-services.git). There are two sample applications, callme-service and caller-service, deployed in two different versions, 1.0 and 2.0. Version 1.0 is available in branch v1 (https://github.com/piomin/sample-istio-services/tree/v1), while version 2.0 is in branch v2 (https://github.com/piomin/sample-istio-services/tree/v2). Using these sample applications in different versions, I’m going to show you different strategies of traffic management depending on an HTTP header set in the incoming requests.

We may force caller-service to route all requests to a specific version of callme-service by setting the header x-version to v1 or v2. We can also leave this header unset, which results in splitting traffic between all existing versions of the service. If the request comes to version v1 of caller-service, the traffic is split 50-50 between the two instances of callme-service. If the request is received by a v2 instance of caller-service, 75% of the traffic is forwarded to version v2 of callme-service, and only 25% to v1. The scenario described above is illustrated in the following diagram.

istio-advanced-1

Before we proceed to the example, I should say a few words about traffic management with Istio. If you have read my previous article about Istio, you probably know that each rule is assigned to a destination. Rules control the process of request routing within a service mesh. One very important piece of information about them, especially for the purposes of the example illustrated in the diagram above, is that multiple rules can be applied to the same destination. The priority of every rule is determined by its precedence field. There is one principle related to the value of this field: the higher the value of this integer field, the greater the priority of the rule. As you may guess, if there is more than one rule with the same precedence value, the order of rule evaluation is undefined. In addition to a destination, we may also define a source of the request in order to restrict a rule to a specific caller. If there are multiple deployments of a calling service, we can even filter them out by setting the source’s label field. Of course, we can also specify the attributes of an HTTP request, such as uri, scheme or headers, that are used for matching a request with a defined rule.

Ok, now let’s take a look at the rule with the highest priority. Its name is callme-service-v1 (1). It applies to callme-service (2), and has the highest priority in comparison to the other rules (3). It applies only to requests sent by caller-service (4) that contain the HTTP header x-version with value v1 (5). This route rule applies only to version v1 of callme-service (6).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1 # (1)
spec:
  destination:
    name: callme-service # (2)
  precedence: 4 # (3)
  match:
    source:
      name: caller-service # (4)
    request:
      headers:
        x-version:
          exact: "v1" # (5)
  route:
  - labels:
      version: v1 # (6)

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-7

The next rule, callme-service-v2 (1), has a lower priority (2). However, it does not conflict with the first rule, because it applies only to requests containing the x-version header with value v2 (3). It forwards all such requests to version v2 of callme-service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2 # (1)
spec:
  destination:
    name: callme-service
  precedence: 3 # (2)
  match:
    source:
      name: caller-service
    request:
      headers:
        x-version:
          exact: "v2" # (3)
  route:
  - labels:
      version: v2 # (4)

As before, here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-6

The rule callme-service-v1-default (1) visible in the code fragment below has a lower priority (2) than the two previously described rules. In practice this means that it is executed only if the conditions defined in the two previous rules were not fulfilled. Such a situation occurs if you do not pass the x-version header inside the HTTP request, or if it has a value different than v1 or v2. The rule visible below applies only to the instance of the calling service labeled with version v1 (3). Finally, the traffic to callme-service is load balanced in proportions 50-50 between the two versions of that service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 2 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v1 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-4

The last rule is pretty similar to the previously described callme-service-v1-default. Its name is callme-service-v2-default (1), and it applies only to version v2 of caller-service (3). It has the lowest priority (2), and splits traffic between the two versions of callme-service in proportions 75-25 in favor of version v2 (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 1 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v2 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 25
  - labels:
      version: v2
    weight: 75

As before, I have also included a diagram illustrating the behaviour of this rule.

istio-advanced-5

All the rules may be placed inside a single file. In that case they should be separated with a --- line. This file is available in the code repository inside the callme-service module as multi-rule.yaml. To deploy all the defined rules on Kubernetes, just execute the following command.

$ kubectl apply -f multi-rule.yaml

After a successful deployment you may check out the list of available rules by running the command istioctl get routerule.

istio-advanced-2

Before we start any tests, we obviously need to have the sample applications deployed on Kubernetes. These applications are really simple and pretty similar to the applications used for tests in my previous article about Istio. The controller visible below implements the method GET /callme/ping, which prints the version of the application taken from pom.xml and the value of the x-version HTTP header received in the request.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + " with version " + version;
	}

}

Here’s the controller class that implements the method GET /caller/ping. It prints the version of caller-service taken from pom.xml and calls the method GET /callme/ping exposed by callme-service. It includes the x-version header in the request sent to the downstream service.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		HttpHeaders headers = new HttpHeaders();
		if (version != null)
			headers.set("x-version", version);
		HttpEntity entity = new HttpEntity(headers);
		ResponseEntity response = restTemplate.exchange("http://callme-service:8091/callme/ping", HttpMethod.GET, entity, String.class);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response.getBody() + " with header " + version;
	}

}
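
The autowired RestTemplate is not provided as a bean out of the box by Spring Boot, so it has to be declared somewhere in the application. Here’s a minimal sketch of such a declaration; the ClientConfig class name is hypothetical.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class ClientConfig {

	// plain RestTemplate used by CallerController to call callme-service
	@Bean
	public RestTemplate restTemplate() {
		return new RestTemplate();
	}

}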

Now, we may proceed to building and deploying the applications on Kubernetes. Here are the steps.

1. Building the application

First, switch to branch v1 and build the whole sample-istio-services project by executing the mvn clean install command.

2. Building Docker image

The Dockerfiles are placed in the root directory of every application. Build their Docker images by executing the following commands.

$ docker build -t piomin/callme-service:1.0 .
$ docker build -t piomin/caller-service:1.0 .

Alternatively, you may omit this step, because images piomin/callme-service and piomin/caller-service are available on my Docker Hub account.

3. Inject Istio components to Kubernetes deployment file

The Kubernetes YAML deployment file is available in the root directory of every application as deployment.yaml. The result of the following command should be saved as a separate file, for example deployment-with-istio.yaml (you can simply redirect the command output to that file).

$ istioctl kube-inject -f deployment.yaml

4. Deployment on Kubernetes

Finally, you can execute well-known kubectl command in order to deploy Docker container with our sample application.

$ kubectl apply -f deployment-with-istio.yaml

Then switch to branch v2, and repeat the steps described above for version 2.0 of the sample applications. The final deployment result is visible in the picture below.

istio-advanced-3

One very useful thing when running Istio on Kubernetes is the out-of-the-box integration with tools like Zipkin, Grafana or Prometheus. Istio automatically sends some metrics that are collected by Prometheus, for example the total number of requests in the metric istio_request_count. YAML deployment files for these plugins are available inside the directory ${ISTIO_HOME}/install/kubernetes/addons. Before installing Prometheus using the kubectl command I suggest changing the service type from the default ClusterIP to NodePort by adding the line type: NodePort.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090

Then we should run the command kubectl apply -f prometheus.yaml in order to deploy Prometheus on Kubernetes. The deployment is available inside the istio-system namespace. To check the external port of the service, run the following command. For me, the dashboard is available at the address http://192.168.99.100:32293.
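
The exact command is only visible in the screenshot below; an equivalent check, assuming the default istio-system namespace, is:

$ kubectl get svc prometheus -n istio-system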

istio-advanced-14

In the following diagram, visualized using Prometheus, I filtered out only the requests sent to callme-service. Green points to requests received by version v2 of the service, while red points to requests processed by version v1. As you can see in this diagram, in the beginning I sent the requests to caller-service with the HTTP header x-version set to v2, then I didn’t set this header and the traffic was split between the two deployed versions of the service. Finally, I set it to v1. I defined the expression rate(istio_request_count{destination_service="callme-service.default.svc.cluster.local"}[1m]), which returns the per-second rate of requests received by callme-service.

istio-advanced-13

Testing

Before sending some test requests to caller-service we need to obtain its address on Kubernetes. After executing the following command you can see that it is available at http://192.168.99.100:32237/caller/ping.
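
The command itself is shown only in the screenshot; one way to obtain the address, assuming the service name caller-service, is:

$ minikube service caller-service --url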

istio-services-16

We have four possible scenarios. In the first one, when we set the header x-version to v1, the request will always be routed to callme-service-v1.
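
For example, you may test it with curl, using the address obtained in the previous step:

$ curl -H "x-version: v1" http://192.168.99.100:32237/caller/ping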

istio-advanced-10

If the x-version header is not included in the request, the traffic will be split between callme-service-v1…

istio-advanced-11

… and callme-service-v2.

istio-advanced-12

Finally, if we set the header x-version to v2, the request will always be routed to callme-service-v2.

istio-advanced-14

Conclusion

Using Istio you can easily create and apply both simple and more advanced traffic management rules to applications deployed on Kubernetes. You can also monitor metrics and traces through Istio’s integration with Zipkin, Prometheus and Grafana.

Service Mesh with Istio on Kubernetes in 5 steps

In this article I’m going to show you some basic and more advanced samples that illustrate how to use the Istio platform in order to provide communication between microservices deployed on Kubernetes. Following the description on the Istio website, it is:

An open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

Istio provides mechanisms for traffic management like request routing, discovery, load balancing, failure handling and fault injection. Additionally, you may enable istio-auth, which provides RBAC (Role-Based Access Control) and Mutual TLS Authentication. In this article we will discuss only the traffic management mechanisms.

Step 1. Installing Istio on Minikube platform

The most comfortable way to test Istio locally on Kubernetes is through Minikube. I have already described how to configure Minikube on your local machine in this article: Microservices with Kubernetes and Docker. When installing Istio on Minikube we should first enable some of Minikube’s plugins during startup.

minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key" --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Istio is installed in a dedicated namespace called istio-system, but it is able to manage services from all other namespaces. First, you should go to the release page and download the installation file corresponding to your OS. For me it is Windows, and all the next steps will be described with the assumption that we are using exactly this OS. After running Minikube it would be useful to enable Docker on Minikube’s VM. Thanks to that, you will be able to execute docker commands from your local command line.

@FOR /f "tokens=* delims=^L" %i IN ('minikube docker-env') DO @call %i

Now, extract the Istio files to your local filesystem. The file istioctl.exe, which is available under the ${ISTIO_HOME}/bin directory, should be added to your PATH. Istio contains installation files for the Kubernetes platform in ${ISTIO_HOME}/install/kubernetes. To install Istio’s core components on Minikube, just apply the following YAML definition file.

kubectl apply -f install/kubernetes/istio.yaml

Now, you have Istio’s core components deployed on your Minikube instance. These components are:

Envoy – an open-source edge and service proxy designed for cloud-native applications. Istio uses an extended version of the Envoy proxy. If you are interested in some details about Envoy and microservices, read my article Envoy Proxy with Microservices, which describes how to integrate an Envoy gateway with service discovery.

Mixer – a platform-independent component responsible for enforcing access control and usage policies across the service mesh.

Pilot – provides service discovery for the Envoy sidecars, and traffic management capabilities for intelligent routing and resiliency.

The configuration provided inside the istio.yaml definition file deploys some pods and services related to the components mentioned above. You can verify the installation using the kubectl command, or just by visiting the web dashboard available after executing the command minikube dashboard.
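
For example, a quick check that all Istio pods are up, assuming the default istio-system namespace:

$ kubectl get pods -n istio-system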

istio-2

Step 2. Building sample applications based on Spring Boot

Before we start configuring any traffic rules with Istio, we need to create sample applications that will communicate with each other. These are really simple services. The source code of these applications is available on my GitHub account inside the repository sample-istio-services. There are two services: caller-service and callme-service. Both of them expose a ping endpoint, which prints the application’s name and version. Both of these values are taken from the Spring Boot build-info file, which is generated during the application build. Here’s the implementation of the endpoint GET /callme/ping.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		return buildProperties.getName() + ":" + buildProperties.getVersion();
	}

}
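
The BuildProperties bean used above is available only when the build-info file is generated during the Maven build. A minimal sketch of the plugin configuration that produces it, using the standard spring-boot-maven-plugin goal:

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<!-- generates META-INF/build-info.properties consumed by BuildProperties -->
				<goal>build-info</goal>
			</goals>
		</execution>
	</executions>
</plugin>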

And here’s the implementation of the endpoint GET /caller/ping. It calls the GET /callme/ping endpoint using Spring RestTemplate. We are assuming that callme-service is available under the address callme-service:8091 on Kubernetes. This service will be exposed on the Minikube node under port 8091.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		String response = restTemplate.getForObject("http://callme-service:8091/callme/ping", String.class);
		LOGGER.info("Calling: response={}", response);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response;
	}

}

The sample applications have to be started in Docker containers. Here’s the Dockerfile responsible for building the image with the caller-service application.

FROM openjdk:8-jre-alpine
ENV APP_FILE caller-service-1.0.0-SNAPSHOT.jar
ENV APP_HOME /usr/app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

A similar Dockerfile is available for callme-service. Now, the only thing we have to do is build the Docker images.

docker build -t piomin/callme-service:1.0 .
docker build -t piomin/caller-service:1.0 .

There is also version 2.0.0-SNAPSHOT of callme-service available in branch v2. Switch to this branch, build the whole application, and then build the Docker image with the 2.0 tag. Why do we need version 2.0? I’ll describe it in the next section.

docker build -t piomin/callme-service:2.0 .

Step 3. Deploying sample applications on Minikube

Before we start deploying our applications on Minikube, let’s take a look at the sample system architecture visible in the following diagram. We are going to deploy callme-service in two versions: 1.0 and 2.0. The application caller-service just calls callme-service, so it does not know anything about the different versions of the target service. If we would like to route traffic between the two versions of callme-service in proportions 20% to 80%, we have to configure the proper Istio route rule. One more thing: because Istio Ingress is not supported on Minikube, we will just use a Kubernetes Service. If we need to expose it outside the Minikube cluster, we should set its type to NodePort.

istio-1

Let’s proceed to the deployment phase. Here’s deployment definition for callme-service in version 1.0.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: NodePort
  ports:
  - port: 8091
    name: http
  selector:
    app: callme-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8091

Before deploying it on Minikube we have to inject some Istio properties. The command visible below prints a new version of the deployment definition enriched with the Istio configuration. We may copy it and save it as the deployment-with-istio.yaml file.

istioctl kube-inject -f deployment.yaml

Now, let’s apply the configuration to Kubernetes.

kubectl apply -f deployment-with-istio.yaml

The same steps should be performed for caller-service, and also for version 2.0 of callme-service. All YAML configuration files are committed together with the applications, and are located in the root directory of every application’s module. If you have successfully deployed all the required components, you should see the following elements in your Minikube dashboard.

istio-3

Step 4. Applying Istio routing rules

Istio provides a simple domain-specific language (DSL) that allows you to configure some interesting rules that control how requests are routed within your service mesh. I’m going to show you the following rules:

  • Splitting traffic between different service versions
  • Injecting a delay into the request path
  • Injecting an HTTP error as a response from the service

Here’s a sample route rule definition for callme-service. It splits traffic in proportions 20:80 between versions 1.0 and 2.0 of the service. It also adds a 3-second delay to 10% of the requests, and returns an HTTP 500 error code for 10% of the requests.

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service
spec:
  destination:
    name: callme-service
  route:
  - labels:
      version: v1
    weight: 20
  - labels:
      version: v2
    weight: 80
  httpFault:
    delay:
      percent: 10
      fixedDelay: 3s
    abort:
      percent: 10
      httpStatus: 500

Let’s apply a new route rule to Kubernetes.

kubectl apply -f routerule.yaml

Now, we can easily verify that rule by executing the command istioctl get routerule.

istio-6

Step 5. Testing the solution

Before we start testing, let’s deploy Zipkin on Minikube. Istio provides the deployment definition file zipkin.yaml inside the directory ${ISTIO_HOME}/install/kubernetes/addons.

kubectl apply -f zipkin.yaml

Let’s take a look at the list of services deployed on Minikube. The API provided by the application caller-service is available under port 30873.

istio-4

We may easily test the service from a web browser by calling the URL http://192.168.99.100:30873/caller/ping. It prints the name and version of the service, and also the name and version of the callme-service instance invoked by caller-service. Because 80% of the traffic is routed to version 2.0 of callme-service, you will probably see the following response.

istio-7

However, sometimes version 1.0 of callme-service may be called…

istio-8

… or Istio may simulate an HTTP 500 code.

istio-9

You can easily analyze traffic statistics with the Zipkin console.

istio-10

Or just take a look at the logs generated by the pods.
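
For example, the label and container names below follow the deployment definition shown earlier:

$ kubectl logs -l app=callme-service -c callme-service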

istio-11