Local Continuous Delivery Environment with Docker and Jenkins

In this article I’m going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on the local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), a Jenkins master, a Jenkins JNLP slave with Docker, and a private Docker registry. All of those tools will run locally using their Docker images. Thanks to that you will be able to easily test the setup on your laptop, and then configure the same environment in production, deployed across multiple servers or VMs. Let’s take a look at the architecture of the proposed solution.

art-docker-1

1. Running Jenkins Master

We use the latest Jenkins LTS image. The Jenkins web dashboard is exposed on port 38080. Slave agents connect to the master on the default JNLP (Java Web Start) port, 50000.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After the container starts, run docker logs jenkins to obtain the initial admin password. Find the following fragment in the logs, copy the generated password and paste it into the Jenkins setup page available at http://192.168.99.100:38080.

art-docker-2
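
Alternatively, the password can be read directly from the container’s filesystem (a minimal sketch; the path below is the default secrets location in the official Jenkins image):

$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword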

We have to install some Jenkins plugins to be able to check out the project from a Git repository, build the application from source code using Maven, and finally build and push a Docker image to a private registry. Here’s the list of required plugins (a sketch for preinstalling them in a custom master image follows the list):

  • Git Plugin – allows Jenkins to use Git as a build SCM
  • Maven Integration Plugin – provides advanced integration for Maven 2/3
  • Pipeline Plugin – a suite of plugins that lets you define continuous delivery pipelines as code and run them in Jenkins
  • Docker Pipeline Plugin – allows you to build and use Docker containers from pipelines
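
If you prefer to preinstall these plugins instead of clicking through the UI, here’s a minimal sketch of a custom master image. It assumes the install-plugins.sh helper shipped with the jenkins/jenkins:lts image and the plugin IDs git, maven-plugin, workflow-aggregator and docker-workflow; verify both against your Jenkins version.

FROM jenkins/jenkins:lts
# preinstall the plugins listed above (plugin IDs are assumptions, check them for your version)
RUN /usr/local/bin/install-plugins.sh git maven-plugin workflow-aggregator docker-workflow

Build it with docker build -t my-jenkins . and use my-jenkins in place of jenkins/jenkins:lts in the run command above.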

2. Building Jenkins Slave

Pipelines are usually run on a different machine than the one hosting the master node. Moreover, we need the Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready-made Docker images with Docker-in-Docker and the Jenkins agent, I have never found an image with JDK, Maven, Git and Docker installed together. These are the most commonly used tools when building images for your microservices, so it is definitely worth preparing such an image for our Jenkins slave.

Here’s the Dockerfile for a Jenkins Docker-in-Docker slave with Git, Maven and OpenJDK installed. I used Docker-in-Docker as the base image (1). We can override some properties when running our container. You will probably have to override the default Jenkins master address (2) and the slave secret key (3). The remaining parameters are optional, and you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with sudo rights for running Docker (5). Finally, we run the entrypoint.sh script, which starts the Docker daemon and the Jenkins agent (6).

# (1)
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2)
ENV JENKINS_URL http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3)
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo

# (4)
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA}  /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar

COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
# (6)
ENTRYPOINT ["/usr/local/bin/entrypoint"]

Here’s the script entrypoint.sh.

#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
	-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
	-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository https://github.com/piomin/jenkins-slave-dind-jnlp.git, build the image and then start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it yourself is optional, because the image is already available on my Docker Hub account.
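
If you decide to skip the build, you can just pull the prebuilt image (assuming the default latest tag):

$ docker pull piomin/jenkins-slave-dind-jnlp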

art-docker-3

3. Enabling Docker-in-Docker Slave

To add a new slave node, navigate to Manage Jenkins -> Manage Nodes -> New Node. Then define a permanent node with the name parameter filled in. The most suitable name is the default declared inside the Docker image definition – dind-node. You also have to set the remote root directory, which should be equal to the path defined inside the container by the JENKINS_HOME environment variable. In my case it is /home/jenkins. The slave node should be launched via Java Web Start (JNLP).

art-docker-4

The new node is visible on the list of nodes as disabled. Click it in order to obtain its secret key.

art-docker-5

Finally, you may run your slave container using the following command, containing the secret copied from the node’s panel in the Jenkins web dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan, you should see the enabled dind-node on the list of nodes.

art-docker-6

4. Setting up Docker Private Registry

After deploying the Jenkins master and slave, the last required element of the architecture has to be launched – the private Docker registry. Because we will access it remotely (from the Docker-in-Docker container), we have to configure a secure TLS/SSL connection. To achieve that we should first generate a TLS certificate and key using the openssl tool. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 1024

Then, we should generate a certificate request file (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate that is valid for 1 year using openssl command as shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don’t forget to remove the passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456

You should copy the generated .key and .crt files to your Docker machine. After that you may run the Docker registry using the following command.

docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2

If the registry has started successfully, you should be able to access it over HTTPS by calling https://192.168.99.100:5000/v2/_catalog from your web browser.
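
Because the certificate is self-signed, every Docker engine that pushes to or pulls from the registry (including the Docker-in-Docker slave) has to trust it. A minimal sketch of one way to do that is shown below: copy the certificate into the engine’s certs.d directory named after the registry address. Alternatively, you could add 192.168.99.100:5000 to the engine’s insecure-registries setting, at the cost of weaker security.

# on every Docker host that talks to the registry (path assumed for a standard Linux install)
$ sudo mkdir -p /etc/docker/certs.d/192.168.99.100:5000
$ sudo cp registry.crt /etc/docker/certs.d/192.168.99.100:5000/ca.crt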

5. Creating application Dockerfile

The sample application’s source code is available on GitHub in the repository sample-spring-microservices-new (https://github.com/piomin/sample-spring-microservices-new.git). It contains several modules with microservices. Each of them has a Dockerfile in its root directory. Here’s a typical Dockerfile for a microservice built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building pipeline through Jenkinsfile

This step is the most important phase of our exercise. We will prepare a pipeline definition which combines all of the previously discussed tools and solutions. This pipeline definition is part of every sample application’s source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.
Every pipeline is divided into stages. Every stage defines a subset of tasks performed within the pipeline. We can select the node responsible for executing the pipeline’s steps or leave it empty to allow any node to be selected. Because we have already prepared a dedicated node with Docker, we force the pipeline to be run on that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using a Maven command (2). Once the fat JAR file has been prepared we may proceed to building the application’s Docker image (3). We use methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR to the secure private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our registry. Here’s the full code of the Jenkinsfile prepared for the config-service module.

node('dind-node') {
    stage('Checkout') { // (1)
      git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Build') { // (2)
      dir('config-service') {
        sh 'mvn clean install'
        def pom = readMavenPom file:'pom.xml'
        print pom.version
        env.version = pom.version
        currentBuild.description = "Release: ${env.version}"
      }
    }
    stage('Image') {
      dir ('config-service') {
        docker.withRegistry('https://192.168.99.100:5000') {
          def app = docker.build "piomin/config-service:${env.version}" // (3)
          app.push() // (4)
        }
      }
    }
}

7. Creating Pipeline in Jenkins Web Dashboard

After preparing the application’s source code, Dockerfile and Jenkinsfile, the only thing left is to create the pipeline in the Jenkins UI. Select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then go to the Configure panel and select Pipeline script from SCM in the Pipeline section. In the form you should fill in the address of the Git repository, the user credentials and the location of the Jenkinsfile.

art-docker-7

8. Configuring a GitLab WebHook (Optional)

If you run GitLab locally using its Docker image, you can configure a webhook that triggers a run of your pipeline after changes are pushed to the Git repository. To run GitLab with Docker, execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab dashboard we need to enable this feature for the Jenkins pipeline. To achieve that, first install the GitLab Plugin.

art-docker-8

Then come back to the pipeline’s configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline, called config-service-pipeline, under the URL http://192.168.99.100:38080/project/config-service-pipeline as shown in the following picture.

art-docker-9

Before proceeding to the webhook configuration in the GitLab dashboard, you should retrieve your Jenkins user API token. To do that, go to the profile panel, select Configure and click the Show API Token button.

art-docker-10

To add a new webhook for your Git repository, go to Settings -> Integrations and fill in the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.

art-docker-11

9. Running pipeline

Now we may finally run our pipeline. If you use the GitLab Docker container as your Git repository platform, you just have to push a change to the source code. Otherwise you have to start the pipeline build manually. The first build will take a few minutes, because Maven has to download the dependencies required for building the application. If everything ends with success, you should see the following result on your pipeline dashboard.

art-docker-13

You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.

art-docker-12
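
The same check can be done from the command line (a sketch; -k skips verification of the self-signed certificate, and the exact list depends on what you have already pushed):

$ curl -k https://192.168.99.100:5000/v2/_catalog
{"repositories":["piomin/config-service"]}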


Mastering Spring Cloud

Let me share with you the result of my last couple of months of work – the book published on 26th April by Packt. Mastering Spring Cloud is strictly linked to the topics frequently published on this blog – it describes how to build microservices using the Spring Cloud framework. I tried to write the book in the style well known from this blog, where I focus on giving you practical samples of working code without unnecessary small talk and scribbles 🙂 If you like my style of writing, and in addition you are interested in the Spring Cloud framework and microservices, this book is just for you 🙂

The book consists of fifteen chapters, where I guide you from basic to advanced examples illustrating use cases for almost all of the projects that are part of Spring Cloud. When writing blog posts I do not always have time to go into all the details related to Spring Cloud, since I try to cover many different, interesting trends and solutions in the area of Java development. The book describes in detail the most important Spring Cloud projects: service discovery, distributed configuration, inter-service communication, security, logging, testing and continuous delivery. It is available on the packtpub.com site, together with a detailed description of all the topics covered: https://www.packtpub.com/application-development/mastering-spring-cloud.

Personally, I particularly recommend reading about the following more advanced subjects described in the book:

  • Peer-to-peer replication between multiple instances of Eureka servers, and using zoning mechanism in inter-service communication
  • Automatically reloading configuration after changes with Spring Cloud Config push notifications mechanism based on Spring Cloud Bus
  • Advanced configuration of inter-service communication with Ribbon client-side load balancer and Feign client
  • Enabling SSL secure communication between microservices and basic elements of microservices-based architecture like service discovery or configuration server
  • Building messaging microservices based on the publish/subscribe communication model, including consumer grouping, partitioning and scaling with Spring Cloud Stream and message brokers (Apache Kafka, RabbitMQ)
  • Setting up continuous delivery for Spring Cloud microservices with Jenkins and Docker
  • Using Docker for running Spring Cloud microservices on Kubernetes platform simulated locally by Minikube
  • Deploying Spring Cloud microservices on cloud platforms like Pivotal Web Services (Pivotal Cloud Foundry hosted cloud solution) and Heroku

Those examples and many others are available in the book. To finish, here’s a short description taken from the packtpub.com site:

Developing, deploying, and operating cloud applications should be as easy as local applications. This should be the governing principle behind any cloud platform, library, or tool. Spring Cloud–an open-source library–makes it easy to develop JVM applications for the cloud. In this book, you will be introduced to Spring Cloud and will master its features from the application developer’s point of view.

Running Vert.x Microservices on Kubernetes/OpenShift

Automatic deployment, scaling, container orchestration and self-healing have been some of the most popular topics of recent months. This is reflected in the rapidly growing popularity of tools like Docker, Kubernetes and OpenShift. It’s hard to find a developer who hasn’t heard about these technologies. But how many of you have actually set up and run all those tools locally?

Despite appearances, it is not a very hard thing to do. Both Kubernetes and OpenShift provide simplified, single-node versions of their platforms that allow you to create and try out a local cluster, even on Windows.

In this article I’m going to guide you through all the steps needed to deploy and run microservices that communicate with each other and use MongoDB as a data source.

Technologies

Eclipse Vert.x – a toolkit for building reactive applications (and more) on the JVM. It’s a polyglot, event-driven, non-blocking and fast toolkit, which makes it a perfect choice for creating lightweight, high-performance microservices.

Kubernetes – an open-source system for automating deployment, scaling, and management of containerized applications. Even the Docker platform has decided to support Kubernetes, although Docker promotes its own clustering solution – Docker Swarm. You can easily run Kubernetes locally using Minikube; however, we won’t use it this time. You can read an interesting article about creating Spring Boot microservices and running them on Minikube here: Microservices with Kubernetes and Docker.

RedHat OpenShift – an open-source container application platform built on top of Docker containers and Kubernetes. It is also available online at https://www.openshift.com/. You can easily run it locally with Minishift.

Getting started with Minishift

Of course, you can read the tutorials available on the RedHat website, but I’ll try to condense the installation and configuration instructions into a few words. Firstly, I would like to point out that all the instructions apply to Windows.

Minishift requires a hypervisor to start the virtual machine, so first you should download and install one. If, like me, you use a solution other than Hyper-V, you have to pass the driver name when starting Minishift. The command below launches it on Oracle VirtualBox and allocates 3GB of RAM for the VM.

$  minishift start --vm-driver=virtualbox --memory=3G

The minishift.exe executable should be included in the system path. You should also have the Docker client binary installed on your machine. The Docker daemon is managed by Minishift, so you can reuse it for other use cases as well. All you need to do to take advantage of this is to run the following command in your shell.

$ @FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i

The OpenShift platform may be managed using the CLI or the web console. To enable the CLI on Windows you should add it to the path and then run one command to configure your shell. The required steps are displayed after running the following command.

$ minishift oc-env
SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.7.1\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

In order to use the web console just run minishift console, which automatically opens it in your web browser. For me it is available at https://192.168.99.100:8443/console. To check your IP, execute minishift ip.

Sample applications

The source code of the sample applications is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git). In fact, a similar application has already been run locally and described in the article Asynchronous Microservices with Vert.x, which can be treated as an introduction to building microservices with the Vert.x framework in general. The current application is even simpler, because it does not have to integrate with any external discovery server like Consul.

Now, let’s take a look at the code below. It declares a verticle that establishes a client connection to MongoDB and registers a repository object as a proxy service. Such a service may easily be accessed by another verticle. The MongoDB network address is managed by Minishift.

public class MongoVerticle extends AbstractVerticle {

	@Override
	public void start() throws Exception {
		JsonObject config = new JsonObject();
		config.put("connection_string", "mongodb://micro:micro@mongodb/microdb");
		final MongoClient client = MongoClient.createShared(vertx, config);
		final AccountRepository service = new AccountRepositoryImpl(client);
		ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
	}

}

That verticle can be deployed in the application’s main method. It is also important to set the property vertx.disableFileCPResolving to true if you would like to run your application on Minishift. It forces Vert.x to resolve files from its classloader in addition to the file system.

public static void main(String[] args) throws Exception {
	System.setProperty("vertx.disableFileCPResolving", "true");
	Vertx vertx = Vertx.vertx();
	vertx.deployVerticle(new MongoVerticle());
	vertx.deployVerticle(new AccountServer());
}

The AccountServer verticle contains simple API methods that perform CRUD operations on MongoDB.

Building Docker image

Assuming you have successfully installed and configured Minishift, and cloned my sample Maven project shared on GitHub, you may proceed to the build and deploy stage. The first step is to build the applications from source code by executing mvn clean install on the root project. It consists of two independent modules: account-vertx-service and customer-vertx-service. Each of these modules contains a Dockerfile with the image definition. Here’s the one created for customer-vertx-service. It is based on the openjdk:8-jre-alpine image. Alpine Linux is much smaller than most distribution base images, so our resulting image is around 100MB instead of around 600MB when using the standard OpenJDK image. Because we generate fat JAR files during the Maven build, we only have to run the application inside the container using the java -jar command.

FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE customer-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
EXPOSE 8090
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]

Once we have successfully built the project, we should navigate to the main directory of each module. The sample command below builds the Docker image of customer-vertx-service.

$ docker build -t microservices/customer-vertx-service:1.0 .

In fact, there are several different approaches to building and deploying microservices on OpenShift. For example, we could use a Maven plugin or an OpenShift definition file. The approach discussed here is one of the simplest, and it assumes using the CLI and web console for configuring deployments and services.

Deploy application on Minishift

Before proceeding to the main part of this article – deploying and running the application on Minishift – we have to do some pre-configuration. We begin by logging into OpenShift and creating a new project with the oc command. Here are the two required CLI commands. The name of our first OpenShift project is microservices.

$ oc login -u developer -p developer
$ oc new-project microservices

We might as well perform the same actions using the web console. After logging in there, you will first see a dashboard with all the available services brokered by Minishift. Let’s initialize a container with MongoDB. All the provided container settings should be the same as those configured inside the application. After creation, the MongoDB service will be available to all other services under the mongodb name.

minishift-1
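
The same MongoDB service can be created from the CLI. Here’s a minimal sketch using the mongodb-persistent template; the template and parameter names may differ between OpenShift versions, and the credentials and database name below are the ones assumed by the application’s connection string.

$ oc new-app mongodb-persistent \
    -p DATABASE_SERVICE_NAME=mongodb \
    -p MONGODB_USER=micro \
    -p MONGODB_PASSWORD=micro \
    -p MONGODB_DATABASE=microdb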

Creating a MongoDB container managed by Minishift is only part of the job. The most important thing is to deploy the containers with the two sample microservices, each of which needs access to the database. Here as well, we may use two methods of creating resources: the CLI or the web console. Here are the CLI commands for creating the deployments on OpenShift.

$ oc new-app --docker-image microservices/customer-vertx-service:1.0
$ oc new-app --docker-image microservices/account-vertx-service:1.0

The commands above create not only the deployments, but also the pods, and expose each of them as a service. Now you may easily scale the number of running pods by executing the following commands.

oc scale --replicas=2 dc customer-vertx-service
oc scale --replicas=2 dc account-vertx-service

The next step is to expose your service outside the cluster to make it publicly visible. We can achieve that by creating a route, which is roughly the OpenShift counterpart of a Kubernetes ingress. The OpenShift web console provides an interface for creating routes, available under Applications -> Routes. When defining a new route you should enter its name, the name of a service, and the path on the basis of which requests are proxied. If a hostname is not specified, it is automatically generated by OpenShift.

minishift-2
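
The same routes can also be created from the CLI. A minimal sketch (the route names are assumptions chosen to match the hostname shown in the next paragraph):

$ oc expose service account-vertx-service --name=account-route --path=/account
$ oc expose service customer-vertx-service --name=customer-route --path=/customer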

Now, let’s take a look at the web console dashboard. There are three applications deployed: mongodb-persistent, account-vertx-service and customer-vertx-service. Both Vert.x microservices are scaled up to two running instances (Kubernetes pods), and are exposed under an automatically generated hostname with the given context path, for example http://account-route-microservices.192.168.99.100.nip.io/account.

minishift-3

You may check the details of every deployment by expanding it on the list view.

minishift-4

The HTTP API is available from outside the cluster and can easily be tested. Here’s the source code of the REST API implementation for account-vertx-service.

AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");
Router router = Router.router(vertx);
router.route("/account/*").handler(ResponseContentTypeHandler.create());
router.route(HttpMethod.POST, "/account").handler(BodyHandler.create());
router.get("/account/:id").produces("application/json").handler(rc -> {
	repository.findById(rc.request().getParam("id"), res -> {
		Account account = res.result();
		LOGGER.info("Found: {}", account);
		rc.response().end(account.toString());
	});
});
router.get("/account/customer/:customer").produces("application/json").handler(rc -> {
	repository.findByCustomer(rc.request().getParam("customer"), res -> {
		List accounts = res.result();
		LOGGER.info("Found: {}", accounts);
		rc.response().end(Json.encodePrettily(accounts));
	});
});
router.get("/account").produces("application/json").handler(rc -> {
	repository.findAll(res -> {
		List accounts = res.result();
		LOGGER.info("Found all: {}", accounts);
		rc.response().end(Json.encodePrettily(accounts));
	});
});
router.post("/account").produces("application/json").handler(rc -> {
	Account a = Json.decodeValue(rc.getBodyAsString(), Account.class);
	repository.save(a, res -> {
		Account account = res.result();
		LOGGER.info("Created: {}", account);
		rc.response().end(account.toString());
	});
});
router.delete("/account/:id").handler(rc -> {
	repository.remove(rc.request().getParam("id"), res -> {
		LOGGER.info("Removed: {}", rc.request().getParam("id"));
		rc.response().setStatusCode(200);
	});
});
vertx.createHttpServer().requestHandler(router::accept).listen(8095);

Inter-service communication

All the microservices are deployed and exposed outside the cluster. The last thing we still have to do is provide communication between them. In our sample system, customer-vertx-service calls an endpoint exposed by account-vertx-service. Thanks to the Kubernetes services mechanism we may easily call another service from the application’s container, for example using a simple HTTP client implementation. Let’s take a look at the list of services exposed by Kubernetes.

minishift-6

Here’s the client implementation responsible for communication with account-vertx-service. The Vert.x WebClient takes three parameters when calling a GET method: port, hostname and path. We should set the Kubernetes service name as the hostname parameter, and the default container port as the port.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);

	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}

	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-vertx-service", "/account/customer/" + customerId).send(res2 -> {
			LOGGER.info("Response: {}", res2.result().bodyAsString());
			List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
			resultHandler.handle(Future.succeededFuture(accounts));
		});
		return this;
	}

}

AccountClient is invoked inside customer-vertx-service GET /customer/:id endpoint’s implementation.

router.get("/customer/:id").produces("application/json").handler(rc -> {
	repository.findById(rc.request().getParam("id"), res -> {
		Customer customer = res.result();
		LOGGER.info("Found: {}", customer);
		new AccountClient(vertx).findCustomerAccounts(customer.getId(), res2 -> {
			customer.setAccounts(res2.result());
			rc.response().end(customer.toString());
		});
	});
});

Summary

It is no coincidence that OpenShift is considered the leading enterprise distribution of Kubernetes. It adds several helpful features to Kubernetes that simplify its adoption for developers and operations teams. With Minishift you can easily try features such as CI/CD for DevOps, multiple projects with collaboration, networking and log aggregation from multiple pods, all on your local machine.

Microservices with Kubernetes and Docker

In one of my previous posts I described an example continuous delivery configuration for building microservices with Docker and Jenkins. It was a simple configuration where I decided to use only the Docker Pipeline Plugin for building and running containers with microservices. That solution had one big disadvantage – we had to link all the containers with each other to provide communication between the microservices deployed inside them. Today I’m going to present a smart solution which helps us avoid that problem – Kubernetes.

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It was originally designed by Google. It has many features that are especially useful for applications running in production, such as service naming and discovery, load balancing, application health checking, horizontal auto-scaling and rolling updates. There are several important Kubernetes concepts we should know before going into the sample.

Pod – the basic unit in Kubernetes. A pod can consist of one or more containers that are guaranteed to be co-located on the host machine and share the same resources. All containers deployed inside a pod can reach the others via localhost. Each pod has a unique IP address within the cluster.

Service – a set of pods that work together. By default a service is exposed inside the cluster, but it can also be exposed on an external IP address outside the cluster. We can expose it using one of four available behaviors: ClusterIP, NodePort, LoadBalancer and ExternalName.

Replication Controller – a specific type of Kubernetes controller. It handles replication and scaling by running a specified number of copies of a pod across the cluster. It is also responsible for replacing pods if the underlying node fails.

Minikube

Configuring a highly available Kubernetes cluster is not an easy task. Fortunately, there is a tool that makes it easy to run Kubernetes locally – Minikube. It can run a single-node cluster inside a VM, which is really important for developers who want to try it out. Getting started is easy. For example, on Windows you download minikube.exe and kubectl.exe and add them to the PATH environment variable. Then you can start Minikube from the command line with minikube start and use almost all Kubernetes features by calling the kubectl command. An alternative to the command line is the Kubernetes Dashboard, which can be launched with minikube dashboard. In the dashboard we can create, update or delete deployments, and also list and view the configuration of all pods, services, ingresses, replication controllers, etc. Here’s the Kubernetes Dashboard with the list of deployments for our sample.

kube1
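
For reference, here are the commands mentioned above collected in one place (a sketch; it assumes Minikube and kubectl are already on your PATH):

$ minikube start
$ minikube dashboard
$ kubectl get deployments,pods,services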

Application

The microservices architecture of this sample is pretty similar to the one from my article about continuous delivery with Docker and Jenkins mentioned at the beginning of this article. We again have account and customer microservices, and the customer service interacts with the account service while searching for a customer’s accounts. We do not use the gateway (Zuul) and discovery (Eureka) Spring Boot services, because such mechanisms are available on Kubernetes out of the box. Here’s a picture illustrating the architecture of the presented solution. Each microservice’s pod consists of two containers: the first with the microservice application and the second with a Mongo database. Account and customer microservices each have their own database where all data is stored. Each pod is exposed as a service and can be found by name on Kubernetes. We also configure a Kubernetes Ingress which acts as a gateway for our microservices.

kube_micro

The sample application’s source code is available on GitHub. It consists of two modules, account-service and customer-service. It is based on the Spring Boot framework, but doesn’t use any Spring Cloud projects except the Feign client. Here’s the Dockerfile for the account service. We use a small openjdk image – alpine. Thanks to that, our resulting image will be about ~120MB instead of ~650MB when using the standard openjdk base image.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

To enable MongoDB support I add the spring-boot-starter-data-mongodb dependency to pom.xml. We also have to provide connection data in application.yml and annotate the entity class with @Document. The last thing is to declare a repository interface extending MongoRepository, which has the basic CRUD methods already implemented. We add two custom find methods.

public interface AccountRepository extends MongoRepository<Account, String> {

    public Account findByNumber(String number);
    public List<Account> findByCustomerId(String customerId);

}
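
For completeness, here’s a minimal sketch of the MongoDB connection part of application.yml. The micro/micro credentials match the ones mentioned later in the article; the host (localhost, since Mongo runs in the same pod) and the database name are assumptions, so adjust them to your setup.

spring:
  data:
    mongodb:
      # Mongo runs as the second container in the same pod, hence localhost
      uri: mongodb://micro:micro@localhost:27017/micro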

In the customer service we are going to call an API method from the account service. Here’s the declarative REST client @FeignClient declaration. All the pods with the account service are available under the account-service name and the default service port – 2222. These settings are the result of the service configuration on Kubernetes, which I will describe in the next section.

@FeignClient(name = "account-service", url = "http://account-service:2222")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

The Docker images of our microservices can be built with the command below. After the build you should push the image to the official Docker Hub or your private registry. In the next section I’ll describe how to use them on Kubernetes. Docker images of the described microservices are also available in my public Docker Hub repositories as piomin/account-service and piomin/customer-service.

docker build -t piomin/account-service .
docker push piomin/account-service

Kubernetes deployment

You can create a deployment on Kubernetes using the kubectl run command, the Minikube dashboard, or declarative configuration files with the kubectl create command. I’m going to show you how to create all resources from configuration files, because we need to create multi-container deployments in one step. Here’s the deployment configuration file for account-service. We have to provide the deployment name, image name and exposed port. In the replicas property we set the requested number of pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: account-service
  labels:
    run: account-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: account-service
    spec:
      containers:
      - name: account-service
        image: piomin/account-service
        ports:
        - containerPort: 2222
          protocol: TCP
      - name: mongo
        image: library/mongo
        ports:
        - containerPort: 27017
          protocol: TCP

We create the new deployment by running the command below. The same command is used for creating services and ingresses; only the configuration file differs.

kubectl create -f deployment-account.json

Now, let’s take a look at the service configuration file. We have already created the deployment. As you could see in the dashboard, the image has been pulled from Docker Hub, and the pod and replica set have been created. Now we would like to expose our microservice to the outside world; that’s why a service is needed. We also expose the Mongo database on its default port, to be able to connect to the database and create collections from a MongoDB client.

kind: Service
apiVersion: v1
metadata:
  name: account-service
spec:
  selector:
    run: account-service
  ports:
    - name: port1
      protocol: TCP
      port: 2222
      targetPort: 2222
    - name: port2
      protocol: TCP
      port: 27017
      targetPort: 27017
  type: NodePort

kube-2

After creating a similar configuration for the customer service, both microservices are exposed. Inside Kubernetes they are visible under their default ports (2222 and 3333) and service names. That’s why inside the customer service REST client (@FeignClient) we declared the URL http://account-service:2222. No matter how many pods have been created, the service will always be available under that URL, and requests are load balanced across all pods by Kubernetes out of the box. If we would like to access a service from outside Kubernetes, for example in a web browser, we need to call it using the NodePort shown below the container’s default port in the dashboard – in this sample it is port 31638 for the account service and port 31171 for the customer service. If you run Minikube on Windows, your Kubernetes is probably available under the 192.168.99.100 address, so you could try to call the account service using the URL http://192.168.99.100:31638/accounts. Before such a test you need to create the collection in the Mongo database together with the micro/micro user, which is configured for that service inside application.yml.

kube-3
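
The assigned NodePort and the cluster address can also be read from the command line (a sketch; the values shown are the ones from this sample):

$ minikube ip
192.168.99.100
$ kubectl get service account-service -o jsonpath='{.spec.ports[0].nodePort}'
31638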

OK, we have our two microservices available under two different ports, but that is not exactly what we need. We need some kind of gateway available under one IP address which proxies our requests to the right service by matching the request path. Fortunately, such an option is also available in Kubernetes: Ingress. Here’s the ingress configuration file. There are two rules defined, the first for account-service and the second for customer-service. Our gateway is available under the micro.all hostname and the default HTTP port.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: micro.all
    http:
      paths:
      - path: /account
        backend:
          serviceName: account-service
          servicePort: 2222
      - path: /customer
        backend:
          serviceName: customer-service
          servicePort: 3333

The last thing that needs to be done to make the gateway work is to add the following entry to the system hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). Now you can call http://micro.all/accounts from your web browser, or http://micro.all/customers/{id}, which also calls the account service in the background.

[MINIKUBE_IP] micro.all

Conclusion

Kubernetes is a great tool for clustering and orchestrating microservices. It is still a relatively new solution under active development. It can be used together with the Spring Boot stack or as an alternative to Spring Cloud Netflix OSS, which seems to be the most popular solution for microservices right now. It also has a UI dashboard where you can manage and monitor all resources. A production-grade configuration is certainly more complicated than a single-host development configuration with Minikube, but I don’t think that is a solid argument against Kubernetes.

Microservices Continuous Delivery with Docker and Jenkins

Docker, microservices and continuous delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automation of the testing, building and deployment process is particularly important. Docker is an excellent solution for microservices, because it can create and run isolated containers with each service. Today, I’m going to show you how to create a basic continuous delivery pipeline for sample microservices using the most popular software automation tool – Jenkins.

Sample Microservices

Before I get into the main topic of the article, a few words about the structure and tools used to create the sample microservices. The sample application consists of two microservices communicating with each other (account, customer), a discovery server (Eureka) and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks, and its source code is available on GitHub. Spring Cloud supports microservice discovery and gateways out of the box – we only have to define the right dependencies inside the Maven project configuration file (pom.xml). The picture below illustrates the adopted architecture. The customer and account REST API services, the discovery server and the gateway all run inside separate Docker containers. The gateway is the entry point to the microservices system. It interacts with all the other services: it proxies requests to the selected microservices, looking their addresses up in the discovery service. If more than one instance of the account or customer microservice exists, requests are load balanced with Ribbon and the Feign client. The account and customer services register themselves in the discovery server after startup. There is also interaction between the two, for example when we would like to find and return all of a customer’s account details.

Image title

I won’t go into the details of implementing those microservices with Spring Boot and Spring Cloud. If you are interested in a detailed description of the sample application’s development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix and Eureka. In that blog post I described how to implement service discovery, distributed tracing, load balancing, trace ID propagation in logs and an API gateway for microservices with those solutions.

Dockerfiles

Each service in the sample source code has a Dockerfile with its Docker image build definition. It’s really simple. Here’s the Dockerfile for the account service. We use openjdk as the base image. The JAR file from the target directory is added to the image and then run using the java -jar command. The service runs on port 2222, which is exposed to the outside.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

We also had to set the main class in the JAR manifest. We achieve that using the spring-boot-maven-plugin in the module’s pom.xml; the fragment is visible below. We also set the build finalName to cut the version number off the target JAR file. The Dockerfile and Maven build definitions are pretty similar for all the other microservices.

<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Jenkins pipelines

We use the Pipeline Plugin for building continuous delivery pipelines for our microservices. In addition to the standard set of plugins on Jenkins we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.

Image title

Here’s the pipeline definition, written in Groovy, for the discovery service. We have five stages of execution. In the Checkout stage we pull changes from the project’s remote Git repository. Then the project is built with the mvn clean install command and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service Dockerfile and push that image to the local registry. In the fourth stage we run the built image with the default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means the source pipeline finishes without waiting for the account pipeline to complete.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}").run('-p 8761:8761 -h discovery --name discovery')
        }

        stage ('Final') {
            build job: 'account-service-pipeline', wait: false
        }      

    }

}

The account pipeline is very similar. The main difference is in the fourth stage, where the account service container is linked to the discovery container. We need to link those containers, because account-service registers itself in the discovery server and must be able to connect to it using its hostname.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/account-service:${env.version}").run('-p 2222:2222 -h account --name account --link discovery')
        }

        stage ('Final') {
            build job: 'customer-service-pipeline', wait: false
        }      

    }

}

Similar pipelines are also defined for the customer and gateway services. They are available in the main project directory of each microservice as a Jenkinsfile. Every image built during pipeline execution is also pushed to the local Docker registry. To enable a local registry on our host we need to pull and run the Docker registry image, and then use the registry address as an image name prefix when pulling or pushing. The local registry is exposed on its default port, 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.

docker run -d --name registry -p 5000:5000 registry

Testing

You should launch the build on discovery-service-pipeline. This pipeline will not only run the build for the discovery service but also start the next pipeline build (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all the pipelines finish you can check the list of running Docker containers by calling the docker ps command. You should see five containers: the local registry and our four microservices. You can also check the logs of each container by running docker logs, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts, or via the Zuul gateway http://localhost:8765/account/account.

CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
fa3b9e408bb4        localhost:5000/gateway-service:1.0-SNAPSHOT     "java -jar /gatewa..."   About an hour ago   Up About an hour    0.0.0.0:8765->8765/tcp   gateway
cc9e2b44fe44        localhost:5000/customer-service:1.0-SNAPSHOT    "java -jar /custom..."   About an hour ago   Up About an hour    0.0.0.0:3333->3333/tcp   customer
49657f4531de        localhost:5000/account-service:1.0-SNAPSHOT     "java -jar /accoun..."   About an hour ago   Up About an hour    0.0.0.0:2222->2222/tcp   account
fe07b8dfe96c        localhost:5000/discovery-service:1.0-SNAPSHOT   "java -jar /discov..."   About an hour ago   Up About an hour    0.0.0.0:8761->8761/tcp   discovery
f9a7691ddbba        registry

Conclusion

I have presented a basic sample of a continuous delivery environment for microservices using Docker and Jenkins. You can easily see the limitations of the presented solution: for example, we had to link the Docker containers with each other to enable communication between them, and all of the tools and microservices run on the same machine. For a more advanced sample we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers for simulating multiple Docker machines. I hope this article is a fine introduction to continuous delivery for microservices and helps you understand the basics of the idea. You can expect more advanced articles about this subject in the near future.

Jenkins nodes on Docker containers

Jenkins is the most popular open-source automation server written in Java. It has many interesting plugins and features. Today, I’m going to show you one of them – how to set up a Jenkins master server with one slave instance connected to it, so that we can run distributed builds using a few Docker containers. For this sample we use the Docker images of Jenkins (jenkins) and the Jenkins slave (jenkinsci/jnlp-slave). Let’s start by running the Jenkins Docker container.

docker run -d --name jenkins -p 50000:50000 -p 50080:8080 jenkins

Go to the management console (http://192.168.99.100:50080), select Manage Jenkins -> Manage Nodes and then click New Node. On the next page you have to enter the slave name – for this sample it is slave-1. After clicking OK you will see the new node on the list. Now you can configure it by clicking the settings button and display the node’s details by clicking its name on the list.

jenkins-slave

The new node is created but still disabled. After clicking the node you will see a page with its details. The important information is the secret in the command line shown there. Copy that token.

jenkins-slave1

Now we are going to run the Docker image with the JNLP agent. In the docker run command we pass the Jenkins master URL, the secret token and the chosen node name (slave-1). If you would like to set it up without a Docker container, you should download the slave agent JAR file by clicking the Launch button and run the agent from the command line as shown in the picture above.

docker run -d --name jenkins-slave1 jenkinsci/jnlp-slave -url http://192.168.99.100:50080 5d681c12e9c68f14373d62375e852d0874ea9daeca3483df4c858ad3556d406d slave-1

After running the slave container you should see the name slave-1 in the Build Executor Status below the master node.

jenkins-slave2

Now we can configure a sample Jenkins pipeline to test our new slave. Pipeline builds can be run on the master node or on a slave node. Here’s a sample pipeline fragment. To try this sample you need to have the Pipeline Plugin installed on your Jenkins server.

node() {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

You can select the node for running your pipeline by providing the node name. Now the build always runs on the slave-1 node.

node('slave-1') {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

Apache Karaf Microservices

Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed.

Apache Karaf can be run as a standalone container and provides some enterprise-ready features like a shell console, remote access, hot deployment and dynamic configuration. It can be a perfect solution for microservices. The idea of microservices on Apache Karaf was introduced a few years ago: “What I am promoting is the idea of µServices, the concepts of an OSGi service as a design primitive.” – Peter Kriens, March 2010.

Karaf on Docker

First, we need to run a Docker container with Apache Karaf. Surprisingly, there is no official repository with such an image. I found an image with Karaf on Docker Hub here. Unfortunately, it does not expose port 8181 – the default Karaf web port. We will use this image to create our own with port 8181 available outside. Here’s our Dockerfile.

FROM java:8-jdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

ENV KARAF_VERSION=4.0.8

RUN wget http://www-us.apache.org/dist/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /opt/karaf; \
    tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz; \
    rm apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /deploy; \
    sed -i 's/^\(felix\.fileinstall\.dir\s*=\s*\).*$/\1\/deploy/' /opt/karaf/etc/org.apache.felix.fileinstall-deploy.cfg

VOLUME ["/deploy"]
EXPOSE 1099 8101 8181 44444
ENTRYPOINT ["/opt/karaf/bin/karaf"]

Then, with the Docker commands below, we build our image from the Dockerfile and start a new Karaf container.

docker build -t karaf-api .
docker run -d --name karaf -p 1099:1099 -p 8101:8101 -p 8181:8181 -p 44444:44444 karaf-api

Now we can log in to the new Docker container (1). Karaf is installed in the /opt/karaf directory. We run the client by calling ./client in the /opt/karaf/bin directory (2). Then we install the Apache Felix web console, which is available by default under port 8181 (3). You can check it out by opening http://192.168.99.100:8181/system/console in a web browser; the default username and password is karaf. In the web console you can check the full list of features installed in our OSGi container; you can also display that list in the Karaf console using the feature:list command (4). After installing the web console, you can decide whether you prefer the Karaf command line or the Apache Felix console for further actions. For our sample application we need to add some OSGi repositories and features. First, we add the Apache CXF framework repository (5) and its features for HTTP and RESTful web services (6). Then we add the repository for the Jackson framework (7) and some Jackson and Jetty server features (8).

docker exec -i -t karaf /bin/bash (1)
cd /opt/karaf/bin
./client (2)
karaf@root()> feature:install webconsole (3)
karaf@root()> feature:list (4)
karaf@root()> feature:repo-add cxf 3.1.10 (5)
karaf@root()> feature:install http cxf-jaxrs cxf (6)
karaf@root()> feature:repo-add mvn:org.code-house.jackson/features/2.7.6/xml/features (7)
karaf@root()> feature:install jackson-jaxrs-json-provider jetty (8)

Microservices

Our environment has been configured, so now we can take a brief look at the sample application. It’s really simple. It has only three modules: account-cxf, customer-cxf and sample-api. In the sample-api module we have the base service interfaces and model objects. In account-cxf and customer-cxf there are the service implementations and OSGi service declarations in a Blueprint file. The sample application source code is available on GitHub. Here’s the account service implementation class, with its interface below.

public class AccountServiceImpl implements AccountService {

	private List<Account> accounts;

	public AccountServiceImpl() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, "1234567890", 12345, 1));
		accounts.add(new Account(2, "1234567891", 6543, 2));
		accounts.add(new Account(3, "1234567892", 45646, 3));
	}

	public Account findById(Integer id) {
		return accounts.stream().filter(a -> a.getId().equals(id)).findFirst().get();
	}

	public List<Account> findAll() {
		return accounts;
	}

	public Account add(Account account) {
		accounts.add(account);
		account.setId(accounts.size());
		return account;
	}

	@Override
	public List<Account> findAllByCustomerId(Integer customerId) {
		return accounts.stream().filter(a -> a.getCustomerId().equals(customerId)).collect(Collectors.toList());
	}

}

The AccountService interface is in the sample-api module. We use JAX-RS annotations to declare the REST endpoints.

public interface AccountService {

	@GET
	@Path("/{id}")
	@Produces("application/json")
	public Account findById(@PathParam("id") Integer id);

	@GET
	@Path("/")
	@Produces("application/json")
	public List<Account> findAll();

	@GET
	@Path("/customer/{customerId}")
	@Produces("application/json")
	public List<Account> findAllByCustomerId(@PathParam("customerId") Integer customerId);

	@POST
	@Path("/")
	@Consumes("application/json")
	@Produces("application/json")
	public Account add(Account account);

}

Here you can see the OSGi service declarations in the blueprint.xml file. We have declared an AccountServiceImpl bean and set that bean as the service for the JAX-RS endpoint. The endpoint uses JacksonJsonProvider as its data format provider. There is also an important OSGi service declaration exposing AccountService, backed by AccountServiceImpl. This service will be available to other microservices deployed on the Karaf container, for example customer-cxf.

    <cxf:bus id="accountRestBus">
    </cxf:bus>

    <bean id="accountServiceImpl" class="pl.piomin.services.cxf.account.service.AccountServiceImpl"/>
    <service ref="accountServiceImpl" interface="pl.piomin.services.cxf.api.AccountService" />

    <jaxrs:server address="/account" id="accountService">
        <jaxrs:serviceBeans>
            <ref component-id="accountServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
        <jaxrs:providers>
        	<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider"/>
        </jaxrs:providers>
    </jaxrs:server>

Now, let’s take a look at the customer-cxf microservice. Here’s the OSGi blueprint of that service. The JAX-RS server declaration is pretty similar to the one for account-cxf. There is only one addition in comparison with the previously presented blueprint – a reference to AccountService. This reference is injected into CustomerServiceImpl.

	<reference id="accountService" interface="pl.piomin.services.cxf.api.AccountService" />

	<bean id="customerServiceImpl" class="pl.piomin.services.cxf.customer.service.CustomerServiceImpl">
		<property name="accountService" ref="accountService" />
	</bean>

	<jaxrs:server address="/customer" id="customerService">
		<jaxrs:serviceBeans>
			<ref component-id="customerServiceImpl" />
		</jaxrs:serviceBeans>
		<jaxrs:features>
			<cxf:logging />
		</jaxrs:features>
		<jaxrs:providers>
			<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
		</jaxrs:providers>
	</jaxrs:server>

CustomerService uses the OSGi reference to AccountService in its findById method to collect all accounts belonging to the customer with the specified id path parameter, and also exposes some other operations.

public class CustomerServiceImpl implements CustomerService {

	private AccountService accountService;

	private List<Customer> customers;

	public CustomerServiceImpl() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "XXX", "1234567890"));
		customers.add(new Customer(2, "YYY", "1234567891"));
		customers.add(new Customer(3, "ZZZ", "1234567892"));
	}

	@Override
	public Customer findById(Integer id) {
		Customer c = customers.stream().filter(a -> a.getId().equals(id)).findFirst().get();
		c.setAccounts(accountService.findAllByCustomerId(id));
		return c;
	}

	@Override
	public List<Customer> findAll() {
		return customers;
	}

	@Override
	public Customer add(Customer customer) {
		customers.add(customer);
		customer.setId(customers.size());
		return customer;
	}

	public AccountService getAccountService() {
		return accountService;
	}

	public void setAccountService(AccountService accountService) {
		this.accountService = accountService;
	}

}

Each service has the packaging type bundle in its pom.xml and uses the maven-bundle-plugin during the build process. After running mvn clean install on the root project, all bundles will be generated in the target directories. You can install them using the Apache Felix web console or the Karaf command line client, in this order: sample-api, account-cxf, customer-cxf.
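
For reference, here’s a minimal sketch of the relevant pom.xml fragment for a bundle module; the exact bundle instructions (such as Export-Package) will differ per module.

<packaging>bundle</packaging>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>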

Testing

Finally, you can see the list of available CXF endpoints on Karaf by opening http://192.168.99.100:8181/cxf in your web browser. Call http://192.168.99.100:8181/cxf/customer/1 to test findById in CustomerService. You should see JSON with the customer data and all accounts collected from the account microservice.

Conclusion

Treat this post as a short introduction to the microservices concept on the Apache Karaf OSGi container. I have shown how to use CXF endpoints on a Karaf container as a kind of service gateway, and OSGi services for inter-communication between the deployed microservices. Instead of an OSGi reference we could use a JAX-RS proxy client for connecting to the account service from the customer service; you can find some basic examples of that concept on the web. There are also more advanced solutions for service registration and discovery on Karaf, for example remote service calls with Apache ZooKeeper. I think we will take a closer look at them in subsequent posts.