Microservices with Kubernetes and Docker

In one of my previous posts I described an example continuous delivery configuration for building microservices with Docker and Jenkins. It was a simple setup where I decided to use only the Docker Pipeline Plugin for building and running containers with microservices. That solution had one big disadvantage – we had to link all the containers to each other to provide communication between the microservices deployed inside them. Today I’m going to present a smart solution which helps us avoid that problem – Kubernetes.

Kubernetes is an open-source platform for automating deployment, scaling and operations of application containers across clusters of hosts, providing container-centric infrastructure. It was originally designed by Google. It has many features especially useful for applications running in production, like service naming and discovery, load balancing, application health checking, horizontal auto-scaling and rolling updates. There are several important Kubernetes concepts we should know before diving into the sample.

Pod – the basic unit in Kubernetes. It can consist of one or more containers that are guaranteed to be co-located on the host machine and share the same resources. Containers deployed inside the same pod can reach each other via localhost. Each pod has a unique IP address within the cluster.

Service – a set of pods that work together. By default a service is exposed inside the cluster, but it can also be exposed on an external IP address outside the cluster. We can expose it using one of four available types: ClusterIP, NodePort, LoadBalancer and ExternalName.

Replication Controller – a specific type of Kubernetes controller. It handles replication and scaling by running a specified number of copies of a pod across the cluster. It is also responsible for replacing pods if the underlying node fails.

Minikube

Configuring a highly available Kubernetes cluster is not an easy task to perform. Fortunately, there is a tool that makes it easy to run Kubernetes locally – Minikube. It can run a single-node cluster inside a VM, which is really convenient for developers who want to try it out. Getting started is really easy. For example on Windows, you have to download minikube.exe and kubectl.exe and add them to the PATH environment variable. Then you can start Minikube from the command line using the minikube start command and use almost all Kubernetes features by calling the kubectl command. An alternative to the command line is the Kubernetes Dashboard, which can be launched by calling the minikube dashboard command. From the UI dashboard we can create, update or delete deployments, and also list and view the configuration of all pods, services, ingresses, replication controllers, etc. Here’s the Kubernetes Dashboard with the list of deployments for our sample.

[Figure: Kubernetes Dashboard with the list of deployments]
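For reference, the basic workflow from the command line looks like this, assuming minikube and kubectl are already on your PATH:

minikube start
kubectl get nodes
minikube dashboard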

Application

The microservices architecture for our sample is pretty similar to the one from my article about continuous delivery with Docker and Jenkins, which I mentioned at the beginning of this article. We again have account and customer microservices, and the customer service interacts with the account service while searching for a customer’s accounts. We do not use gateway (Zuul) and discovery (Eureka) Spring Boot services, because such mechanisms are available on Kubernetes out of the box. Here’s the picture illustrating the architecture of the presented solution. Each microservice’s pod consists of two containers: the first with the microservice application and the second with a Mongo database, so the account and customer microservices each have their own database where all data is stored. Each pod is exposed as a service and can be discovered by name inside Kubernetes. We also configure a Kubernetes Ingress which acts as a gateway for our microservices.

[Figure: architecture of the sample microservices on Kubernetes]

The sample application source code is available on GitHub. It consists of two modules: account-service and customer-service. It is based on the Spring Boot framework, but doesn’t use any Spring Cloud projects except the Feign client. Here’s the Dockerfile for the account service. We use the small alpine variant of the openjdk image. Thanks to that, our resulting image will be about ~120MB instead of ~650MB with the standard openjdk base image.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

To enable MongoDB support I add the spring-boot-starter-data-mongodb dependency to pom.xml. We also have to provide connection settings in application.yml and annotate the entity class with @Document. The last thing is to declare a repository interface extending MongoRepository, which has the basic CRUD methods already implemented. We add two custom find methods.

public interface AccountRepository extends MongoRepository<Account, String> {

    public Account findByNumber(String number);
    public List<Account> findByCustomerId(String customerId);

}
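For completeness, here’s a minimal sketch of what the annotated entity might look like. The field names are assumptions inferred from the repository’s find methods, so the actual class in the sample project may differ slightly.

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Hypothetical sketch of the Mongo-backed entity; field names are assumptions.
@Document(collection = "account")
public class Account {

	@Id
	private String id;
	private String number;     // used by findByNumber
	private String customerId; // used by findByCustomerId

	// getters and setters omitted for brevity

}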

In the customer service we are going to call an API method from the account service. Here’s the declarative REST client (@FeignClient) declaration. All the pods running the account service are available under the account-service name and the default service port – 2222. These settings result from the service configuration on Kubernetes, which I will describe in the next section.

@FeignClient(name = "account-service", url = "http://account-service:2222")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
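As a usage illustration, the Feign client can be injected like any other Spring bean. Here’s a hypothetical sketch – the actual controller code lives in the sample repository, and the endpoint path below is an assumption. Remember that Feign clients are only created when the application class is annotated with @EnableFeignClients.

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical usage sketch: the Feign client is injected like any other bean.
@RestController
public class CustomerAccountsController {

	@Autowired
	private AccountClient accountClient;

	@RequestMapping("/customers/{id}/accounts")
	public List<Account> getCustomerAccounts(@PathVariable("id") String id) {
		return accountClient.getAccounts(id);
	}

}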

The Docker image of each microservice can be built with the command visible below. After the build you should push the image to the official Docker Hub or your private registry. In the next section I’ll describe how to use them on Kubernetes. Docker images of the described microservices are also available in my public Docker Hub repositories as piomin/account-service and piomin/customer-service.

docker build -t piomin/account-service .
docker push piomin/account-service

Kubernetes deployment

You can create a deployment on Kubernetes using the kubectl run command, the Minikube dashboard or declarative configuration files passed to the kubectl create command. I’m going to show you how to create all resources from configuration files, because we need to create multi-container deployments in one step. Here’s the deployment configuration file for account-service. We have to provide the deployment name, image names and exposed ports. In the replicas property we set the requested number of pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: account-service
  labels:
    run: account-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: account-service
    spec:
      containers:
      - name: account-service
        image: piomin/account-service
        ports:
        - containerPort: 2222
          protocol: TCP
      - name: mongo
        image: library/mongo
        ports:
        - containerPort: 27017
          protocol: TCP

We create the new deployment by running the command below. The same command is used for creating services and the ingress – only the configuration file is different.

kubectl create -f deployment-account.json
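After that, you can verify that the deployment, pods and replica set were created:

kubectl get deployments
kubectl get pods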

Now, let’s take a look at the service configuration file. We have already created the deployment; as you can see in the dashboard, the image has been pulled from Docker Hub, and a pod and replica set have been created. Now we would like to expose our microservice outside the cluster, and that’s why a service is needed. We also expose the Mongo database on its default port, to be able to connect to the database and create collections from a MongoDB client.

kind: Service
apiVersion: v1
metadata:
  name: account-service
spec:
  selector:
    run: account-service
  ports:
    - name: port1
      protocol: TCP
      port: 2222
      targetPort: 2222
    - name: port2
      protocol: TCP
      port: 27017
      targetPort: 27017
  type: NodePort
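The service is created with the same kubectl create command; the file name below is an assumption for this example. You can then list the services to see the NodePorts assigned by Kubernetes:

kubectl create -f service-account.json
kubectl get services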


After creating a similar configuration for the customer service, both microservices are exposed. Inside Kubernetes they are visible under their service names and default ports (2222 and 3333). That’s why inside the customer service REST client (@FeignClient) we declared the URL http://account-service:2222. No matter how many pods have been created, the service will always be available under that URL, and requests are load balanced between all pods by Kubernetes out of the box. If we would like to access a service from outside Kubernetes, for example in a web browser, we need to call it on the NodePort assigned by Kubernetes – in this sample it is port 31638 for the account service and port 31171 for the customer service. If you run Minikube on Windows, your Kubernetes is probably available under the 192.168.99.100 address, so you can try to call the account service using the URL http://192.168.99.100:31638/accounts. Before such a test you need to create the collection in the Mongo database and the micro/micro user, which is configured for that service inside application.yml.
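If you don’t want to read the assigned NodePort from the dashboard, you can also check it from the command line, or ask Minikube for the full service URL:

kubectl get service account-service
minikube service account-service --url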


OK, we have our two microservices available under two different ports, but that is not exactly what we need. We need some kind of gateway available under one address which proxies our requests to the right service by matching the request path. Fortunately, such an option is also available on Kubernetes: Ingress. Here’s the ingress configuration file. There are two rules defined: the first for account-service and the second for customer-service. Our gateway is available under the micro.all host name and the default HTTP port.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: micro.all
    http:
      paths:
      - path: /account
        backend:
          serviceName: account-service
          servicePort: 2222
      - path: /customer
        backend:
          serviceName: customer-service
          servicePort: 3333
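The ingress is created with the same kubectl create command as before; the file name below is again an assumption. Note that on Minikube you may first need to enable the ingress addon:

minikube addons enable ingress
kubectl create -f ingress.json
kubectl get ingress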

The last thing that needs to be done to make the gateway work is to add the following entry to the system hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). Now you can call http://micro.all/accounts or http://micro.all/customers/{id} from your web browser; the latter also calls the account service in the background.

[MINIKUBE_IP] micro.all
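On Linux you can add that entry in one step – a sketch assuming minikube is on your PATH:

echo "$(minikube ip) micro.all" | sudo tee -a /etc/hosts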

Conclusion

Kubernetes is a great tool for microservices clustering and orchestration. It is still a relatively new solution under active development. It can be used together with the Spring Boot stack or as an alternative to Spring Cloud Netflix OSS, which seems to be the most popular solution for microservices right now. It also has a UI dashboard where you can manage and monitor all resources. A production-grade configuration is probably more complicated than a single-host development setup with Minikube, but I don’t think that is a solid argument against Kubernetes.


Microservices Continuous Delivery with Docker and Jenkins

Docker, microservices and continuous delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automation of the testing, building and deployment process seems particularly important. Docker is an excellent solution for microservices, because it can create and run isolated containers with our services. Today, I’m going to present how to create a basic continuous delivery pipeline for sample microservices using the most popular automation server – Jenkins.

Sample Microservices

Before I get into the main topic of the article, a few words about the structure and tools used to create the sample microservices. The sample application consists of two microservices communicating with each other (account, customer), a discovery server (Eureka) and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks, and its source code is available on GitHub. Spring Cloud supports microservice discovery and gateways out of the box – we only have to define the right dependencies inside the Maven project configuration file (pom.xml). The picture illustrating the adopted solution architecture is visible below. The customer and account REST API services, the discovery server and the gateway all run inside separate Docker containers. The gateway is the entry point to the microservices system. It interacts with all other services: it proxies requests to the selected microservices, looking up their addresses in the discovery service. If more than one instance of the account or customer microservice exists, requests are load balanced with Ribbon and the Feign client. The account and customer services register themselves with the discovery server after startup. There is also interaction between them, for example when we want to find and return all of a customer’s account details.

[Figure: architecture of the sample microservices]

I won’t go into the details of implementing those microservices with the Spring Boot and Spring Cloud frameworks. If you are interested in a detailed description of the sample application’s development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix and Eureka. In that blog post I described how to implement service discovery, distributed tracing, load balancing, logging trace ID propagation and an API gateway for microservices with those solutions.

Dockerfiles

Each service in the sample source code has a Dockerfile with its Docker image build definition. It’s really simple. Here’s the Dockerfile for the account service. We use openjdk as a base image. The JAR file from target is added to the image and then run using the java -jar command. The service runs on port 2222, which is exposed outside the container.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

We also had to set the main class in the JAR manifest. We achieve this using the spring-boot-maven-plugin in the module’s pom.xml; the fragment is visible below. We also set the build finalName to cut the version number off the target JAR file. The Dockerfile and Maven build definition are pretty similar for all other microservices.

<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Jenkins pipelines

We use the Pipeline Plugin for building continuous delivery pipelines for our microservices. In addition to the standard plugin set on Jenkins, we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.

[Figure: list of the microservice pipelines in Jenkins]

Here’s the pipeline definition, written in Groovy, for the discovery service. We have five stages of execution. Inside the Checkout stage we pull changes from the project’s remote Git repository. Then the project is built with the mvn clean install command, and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service Dockerfile and push that image to the local registry. In the fourth stage we run the built image with the default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means the source pipeline finishes without waiting for the account pipeline to complete.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}").run('-p 8761:8761 -h discovery --name discovery')
        }

        stage ('Final') {
            build job: 'account-service-pipeline', wait: false
        }      

    }

}

The account pipeline is very similar. The main difference is in the fourth stage, where the account service container is linked to the discovery container. We need to link those containers because account-service registers itself in the discovery server and must be able to connect to it using a hostname.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/account-service:${env.version}").run('-p 2222:2222 -h account --name account --link discovery')
        }

        stage ('Final') {
            build job: 'customer-service-pipeline', wait: false
        }      

    }

}

Similar pipelines are also defined for the customer and gateway services. They are available in each microservice’s directory in the main project as a Jenkinsfile. Every image built during pipeline execution is also pushed to the local Docker registry. To enable a local registry on our host, we need to pull and run the Docker registry image and use the registry address as an image name prefix when pulling or pushing. The local registry is exposed on its default port, 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.

docker run -d --name registry -p 5000:5000 registry
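Once the registry container is running, you can verify the pushed images with a simple HTTP call:

curl http://localhost:5000/v2/_catalog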

Testing

You should launch the build on discovery-service-pipeline. This pipeline will not only run the build for the discovery service, but also start the next pipeline build (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all pipelines finish, you can check the list of running Docker containers by calling the docker ps command. You should see five containers: the local registry and our four microservices. You can also check the logs of each container by running docker logs, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts, or via the Zuul gateway: http://localhost:8765/account/account.

CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
fa3b9e408bb4        localhost:5000/gateway-service:1.0-SNAPSHOT     "java -jar /gatewa..."   About an hour ago   Up About an hour    0.0.0.0:8765->8765/tcp   gateway
cc9e2b44fe44        localhost:5000/customer-service:1.0-SNAPSHOT    "java -jar /custom..."   About an hour ago   Up About an hour    0.0.0.0:3333->3333/tcp   customer
49657f4531de        localhost:5000/account-service:1.0-SNAPSHOT     "java -jar /accoun..."   About an hour ago   Up About an hour    0.0.0.0:2222->2222/tcp   account
fe07b8dfe96c        localhost:5000/discovery-service:1.0-SNAPSHOT   "java -jar /discov..."   About an hour ago   Up About an hour    0.0.0.0:8761->8761/tcp   discovery
f9a7691ddbba        registry

Conclusion

I have presented a basic sample of a continuous delivery environment for microservices using Docker and Jenkins. You can easily see the limitations of the presented solution: for example, we had to link the Docker containers with each other to enable communication between them, and all of the tools and microservices run on the same machine. For a more advanced sample, we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers for simulating multiple Docker machines. I hope this article is a fine introduction to continuous delivery for microservices and helps you understand the basics of the idea. You can expect more advanced articles from me on this subject in the near future.

Jenkins nodes on Docker containers

Jenkins is the most popular open-source automation server written in Java. It has many interesting plugins and features. Today, I’m going to show you one of them – how to set up a Jenkins master server with one slave instance connected to it, so that we can run distributed builds using a few Docker containers. For this sample we use the Docker images of Jenkins (jenkins) and the Jenkins slave (jenkinsci/jnlp-slave). Let’s start by running the Jenkins Docker container.

docker run -d --name jenkins -p 50000:50000 -p 50080:8080 jenkins

Go to the management console (http://192.168.99.100:50080) and select Manage Jenkins -> Manage Nodes, then click New Node. On the next page you have to enter the slave name – for this sample it is slave-1. After clicking OK you will see the new node on the list. Now you can configure it by clicking its settings button, and display the node details by clicking the node name on the list.

[Figure: new node on the Jenkins nodes list]

The new node is created but still disabled. After clicking the node you will see the page with its details. The important information is the secret shown in the agent command line. Copy that token.

[Figure: node details page with the secret token]

Now we are going to run the Docker image with the JNLP agent. In the docker run command we pass the Jenkins master URL, the secret token and the chosen node name (slave-1). If you would like to set it up without a Docker container, you should download the slave agent JAR file by clicking the Launch button and run the agent from the command line as shown in the picture above.

docker run -d --name jenkins-slave1 jenkinsci/jnlp-slave -url http://192.168.99.100:50080 5d681c12e9c68f14373d62375e852d0874ea9daeca3483df4c858ad3556d406d slave-1

After running the slave container you should see slave-1 in the Build Executor Status below the master node.

[Figure: slave-1 in the Build Executor Status]

Now we can configure a sample Jenkins pipeline to test our new slave. Pipeline builds can run on the master node or on a slave node. Here’s a sample pipeline fragment. To try this sample you need the Pipeline Plugin installed on your Jenkins server.

node() {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

You can select the node for running your pipeline by providing the node name. Now the build always runs on the slave-1 node.

node('slave-1') {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

Apache Karaf Microservices

Apache Karaf is a small OSGi-based runtime which provides a lightweight container onto which various components and applications can be deployed.

Apache Karaf can be run as a standalone container and provides some enterprise-ready features like a shell console, remote access, hot deployment and dynamic configuration. It can be a perfect solution for microservices. The idea of microservices on Apache Karaf was introduced a few years ago: “What I am promoting is the idea of µServices, the concepts of an OSGi service as a design primitive.” – Peter Kriens, March 2010.

Karaf on Docker

First, we need to run a Docker container with Apache Karaf. Surprisingly, there is no official repository with such an image. I found an image with Karaf on Docker Hub here. Unfortunately, it does not expose port 8181 – the default Karaf web port. We will use this image to create our own with port 8181 available outside. Here’s our Dockerfile.

FROM java:8-jdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

ENV KARAF_VERSION=4.0.8

RUN wget http://www-us.apache.org/dist/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /opt/karaf; \
    tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz; \
    rm apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /deploy; \
    sed -i 's/^\(felix\.fileinstall\.dir\s*=\s*\).*$/\1\/deploy/' /opt/karaf/etc/org.apache.felix.fileinstall-deploy.cfg

VOLUME ["/deploy"]
EXPOSE 1099 8101 8181 44444
ENTRYPOINT ["/opt/karaf/bin/karaf"]

Then, by running the Docker commands below, we build our image from the Dockerfile and start a new Karaf container.

docker build -t karaf-api .
docker run -d --name karaf -p 1099:1099 -p 8101:8101 -p 8181:8181 -p 44444:44444 karaf-api

Now we can log in to the new Docker container (1). Karaf is installed in the /opt/karaf directory. We run the client by calling ./client in the /opt/karaf/bin directory (2). Then we install the Apache Felix web console, which is by default available on port 8181 (3); you can check it by opening http://192.168.99.100:8181/system/console in a web browser. The default username and password are karaf. In the web console you can check the full list of features installed in our OSGi container; you can also display that list in the Karaf console using the feature:list command (4). After the web console installation you can decide whether you prefer the Karaf command line or the Apache Felix console for further actions. For our sample application we need to add some OSGi repositories and features. First, we add the Apache CXF framework repository (5) and its features for HTTP and RESTful web services (6). Then we add the repository for the Jackson framework (7) and some Jackson and Jetty server features (8).

docker exec -i -t karaf /bin/bash (1)
cd /opt/karaf/bin
./client (2)
karaf@root()> feature:install webconsole (3)
karaf@root()> feature:list (4)
karaf@root()> feature:repo-add cxf 3.1.10 (5)
karaf@root()> feature:install http cxf-jaxrs cxf (6)
karaf@root()> feature:repo-add mvn:org.code-house.jackson/features/2.7.6/xml/features (7)
karaf@root()> feature:install jackson-jaxrs-json-provider jetty (8)

Microservices

Our environment has been configured, so now we can take a brief look at the sample application. It’s really simple, with only three modules: account-cxf, customer-cxf and sample-api. The sample-api module contains the base service interfaces and model objects. In account-cxf and customer-cxf there are the service implementations and OSGi service declarations in Blueprint files. The sample application source code is available on GitHub. Here’s the account service implementation class, with its interface below.

public class AccountServiceImpl implements AccountService {

	private List<Account> accounts;

	public AccountServiceImpl() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, "1234567890", 12345, 1));
		accounts.add(new Account(2, "1234567891", 6543, 2));
		accounts.add(new Account(3, "1234567892", 45646, 3));
	}

	public Account findById(Integer id) {
		return accounts.stream().filter(a -> a.getId().equals(id)).findFirst().get();
	}

	public List<Account> findAll() {
		return accounts;
	}

	public Account add(Account account) {
		accounts.add(account);
		account.setId(accounts.size());
		return account;
	}

	@Override
	public List<Account> findAllByCustomerId(Integer customerId) {
		return accounts.stream().filter(a -> a.getCustomerId().equals(customerId)).collect(Collectors.toList());
	}

}

The AccountService interface is in the sample-api module. We use JAX-RS annotations to declare REST endpoints.

public interface AccountService {

	@GET
	@Path("/{id}")
	@Produces("application/json")
	public Account findById(@PathParam("id") Integer id);

	@GET
	@Path("/")
	@Produces("application/json")
	public List<Account> findAll();

	@GET
	@Path("/customer/{customerId}")
	@Produces("application/json")
	public List<Account> findAllByCustomerId(@PathParam("customerId") Integer customerId);

	@POST
	@Path("/")
	@Consumes("application/json")
	@Produces("application/json")
	public Account add(Account account);

}

Here you can see the OSGi service declarations in the blueprint.xml file. We have declared the AccountServiceImpl bean and set that bean as the service bean of a JAX-RS endpoint. The endpoint uses JacksonJsonProvider as its data format provider. There is also an important OSGi service declaration exposing AccountServiceImpl under the AccountService interface. This service will be available to other microservices deployed in the Karaf container, for example customer-cxf.

    <cxf:bus id="accountRestBus">
    </cxf:bus>

    <bean id="accountServiceImpl" class="pl.piomin.services.cxf.account.service.AccountServiceImpl"/>
    <service ref="accountServiceImpl" interface="pl.piomin.services.cxf.api.AccountService" />

    <jaxrs:server address="/account" id="accountService">
        <jaxrs:serviceBeans>
            <ref component-id="accountServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
        <jaxrs:providers>
        	<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider"/>
        </jaxrs:providers>
    </jaxrs:server>

Now, let’s take a look at the customer-cxf microservice. Here’s the OSGi blueprint of that service. The JAX-RS server declaration is pretty similar to the one for account-cxf. There is only one addition in comparison with the earlier blueprint – a reference to AccountService. This reference is injected into CustomerServiceImpl.

	<reference id="accountService"
		interface="pl.piomin.services.cxf.api.AccountService" />

	<bean id="customerServiceImpl"
		class="pl.piomin.services.cxf.customer.service.CustomerServiceImpl">
		<property name="accountService" ref="accountService" />
	</bean>

	<jaxrs:server address="/customer" id="customerService">
		<jaxrs:serviceBeans>
			<ref component-id="customerServiceImpl" />
		</jaxrs:serviceBeans>
		<jaxrs:features>
			<cxf:logging />
		</jaxrs:features>
		<jaxrs:providers>
			<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
		</jaxrs:providers>
	</jaxrs:server>

CustomerService uses the OSGi reference to AccountService in its findById method to collect all accounts belonging to the customer with the specified id path parameter, and also exposes some other operations.

public class CustomerServiceImpl implements CustomerService {

	private AccountService accountService;

	private List<Customer> customers;

	public CustomerServiceImpl() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "XXX", "1234567890"));
		customers.add(new Customer(2, "YYY", "1234567891"));
		customers.add(new Customer(3, "ZZZ", "1234567892"));
	}

	@Override
	public Customer findById(Integer id) {
		Customer c = customers.stream().filter(a -> a.getId().equals(id)).findFirst().get();
		c.setAccounts(accountService.findAllByCustomerId(id));
		return c;
	}

	@Override
	public List<Customer> findAll() {
		return customers;
	}

	@Override
	public Customer add(Customer customer) {
		customers.add(customer);
		customer.setId(customers.size());
		return customer;
	}

	public AccountService getAccountService() {
		return accountService;
	}

	public void setAccountService(AccountService accountService) {
		this.accountService = accountService;
	}

}

Each service has the bundle packaging type in its pom.xml and uses maven-bundle-plugin during the build process. After running mvn clean install on the root project, all bundles are generated in the target directories. You can install them using the Apache Felix web console or the Karaf command line client, in this order: sample-api, account-cxf, customer-cxf (see the sketch below).
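Installation from the Karaf shell might look like the sketch below. The Maven coordinates are assumptions based on the module and package names, so check the actual pom.xml files before running it.

karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/sample-api/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/account-cxf/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/customer-cxf/1.0-SNAPSHOT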

Testing

Finally, you can see the list of available CXF endpoints on Karaf by calling http://192.168.99.100:8181/cxf in your web browser. Call http://192.168.99.100:8181/cxf/customer/1 to test the findById method of CustomerService. You should see JSON with the customer data and all accounts collected from the account microservice.

Conclusion

Treat this post as a short introduction to the microservices concept on the Apache Karaf OSGi container. I have shown how to use CXF endpoints on Karaf as a kind of service gateway, and OSGi services for communication between the deployed microservices. Instead of an OSGi reference, we could use a JAX-RS proxy client to connect to the account service from the customer service; you can find some basic examples of that concept on the web. There are also more advanced solutions for service registration and discovery on Karaf, for example remote service calls with Apache ZooKeeper. I think we will take a closer look at them in subsequent posts.

Launch microservice in Docker container

Docker, microservices and continuous delivery are increasingly popular topics among modern development teams. Today I’m going to create a simple microservice and show you how to run it in a Docker container using a Maven plugin or a Jenkins pipeline. Let’s start with the application code, which is available at https://github.com/piomin/sample-docker-microservice.git. It has only endpoints for finding all persons and a single person by id. Here’s the controller code:

package pl.piomin.microservices.person;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Api {

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Person> persons;

	public Api() {
		// Use unique ids so that findById can distinguish the entries.
		persons = new ArrayList<>();
		persons.add(new Person(1, "Jan", "Kowalski", 22));
		persons.add(new Person(2, "Adam", "Malinowski", 33));
		persons.add(new Person(3, "Tomasz", "Janowski", 25));
		persons.add(new Person(4, "Alina", "Iksińska", 54));
	}

	@RequestMapping("/person")
	public List<Person> findAll() {
		logger.info("Api.findAll()");
		return persons;
	}

	@RequestMapping("/person/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Api.findById(%d)", id));
		return persons.stream().filter(p -> (p.getId().intValue() == id)).findAny().get();
	}

}

We need to have Docker installed on our machine and a Docker Registry container running on port 5000. If you are interested in commercial support, there is also Docker Trusted Registry, which provides an image registry and some other features like LDAP/Active Directory integration and security certificates.

docker run -d --name registry -p 5000:5000 registry:latest

We use openjdk as the base image for our new microservice image, defined in the Dockerfile below. The application JAR file is launched with the java command, and the service is exposed on port 2222.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD sample-docker-microservice-1.0-SNAPSHOT.jar person-service.jar
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 2222

We use docker-maven-plugin to configure the image build process inside pom.xml. There is no need to use a Dockerfile with that plugin – it has equivalent configuration tags that can be used instead of Dockerfile entries – but our example is based on a Dockerfile.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.13</version>
	<configuration>
		<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
		<imageTags>${project.version}</imageTags>
		<dockerDirectory>src/main/docker</dockerDirectory>
		<dockerHost>https://192.168.99.100:2376</dockerHost>
		<dockerCertPath>C:\Users\minkowp\.docker\machine\machines\default</dockerCertPath>
		<resources>
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>

Finally, we can build our code using Maven command.

mvn clean package docker:build

After running the Maven command, the image is tagged and pushed to the local registry.

docker tag e106e5bf3d57 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
docker push localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT

The application image is now registered in the local Docker Registry. Optionally, we could push it to docker.io or to an enterprise Docker Trusted Registry. We can check it using the API available at http://192.168.99.100:5000/v2/_catalog. Here’s the Docker command for running a container with the newly created image stored in the local registry. The service is available at http://192.168.99.100:2222/person/.

docker run -d --name sample1 -p 2222:2222 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
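You can then test the endpoints implemented in the controller, for example:

curl http://192.168.99.100:2222/person/
curl http://192.168.99.100:2222/person/1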


How to ship logs with Logstash, Elasticsearch and RabbitMQ

Here’s a simple picture of our solution. We’ll start with a sample Spring Boot application that ships logs to a RabbitMQ exchange. Then, using Docker, we’ll configure an environment containing RabbitMQ, Logstash, Elasticsearch and Kibana – each running in a separate Docker container.

[Figure: log shipping architecture with RabbitMQ, Logstash, Elasticsearch and Kibana]

My sample Java application is available on https://github.com/piomin/sample-amqp-logging.git.

There are only two Spring Boot dependencies needed inside pom.xml: the first for the REST controller and the second for AMQP support.

<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-rest</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-amqp</artifactId>
	</dependency>
</dependencies>

Here’s a simple controller with one logging statement.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Controller {

 protected Logger logger = LoggerFactory.getLogger(Controller.class.getName());

 @RequestMapping("/hello/{param}")
 public String hello(@PathVariable("param") String param) {
  logger.info("Controller.hello(" + param + ")");
  return "Hello";
 }

}

I use Logback as the logger implementation and the Spring AMQP appender for sending logs to RabbitMQ over the AMQP protocol.

<appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
	<layout>
		<pattern>
			{
			"time": "%date{ISO8601}",
			"thread": "%thread",
			"level": "%level",
			"class": "%logger{36}",
			"message": "%message"
			}
		</pattern>
	</layout>

	<!-- RabbitMQ connection -->
	<host>192.168.99.100</host>
	<port>30000</port>
	<username>guest</username>
	<password>guest</password>

	<applicationId>api-service-4</applicationId>
	<routingKeyPattern>api-service-4</routingKeyPattern>
	<declareExchange>true</declareExchange>
	<exchangeType>direct</exchangeType>
	<exchangeName>ex_logstash</exchangeName>

	<generateId>true</generateId>
	<charset>UTF-8</charset>
	<durable>true</durable>
	<deliveryMode>PERSISTENT</deliveryMode>
</appender>
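The appender also has to be attached to a logger in logback.xml – a minimal sketch:

<root level="INFO">
	<appender-ref ref="AMQP" />
</root>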

I run the RabbitMQ server using the Docker image from https://hub.docker.com/_/rabbitmq/; here’s the Docker command for it. I chose the rabbitmq:management image to expose the RabbitMQ UI management console on port 30001. After running this command we can go to the management console available at 192.168.99.100:30001. There we have to create a queue named q_logstash and a direct exchange named ex_logstash with a routing key binding to the q_logstash queue.

docker run -d -it --name rabbit --hostname rabbit -p 30000:5672 -p 30001:15672 rabbitmq:management


[Figure: RabbitMQ management console with exchange and queue binding]

Then we run the Elasticsearch and Kibana Docker images. The Kibana container needs to be linked to the Elasticsearch one.

docker run -d -it --name es -p 9200:9200 -p 9300:9300 elasticsearch
docker run -d -it --name kibana --link es:elasticsearch -p 5601:5601 kibana

Finally, we can run the Logstash Docker image, which takes the RabbitMQ queue as input and the Elasticsearch API as output. We have to set the host to the Docker Machine default address and the port configured when running the RabbitMQ container. Since our queue is durable, the durable setting also has to be changed, because its default value is false according to this reference: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html

docker run -d -it --name logstash logstash -e 'input { rabbitmq {
host => "192.168.99.100" port => 30000 durable => true } }
output { elasticsearch { hosts => ["192.168.99.100"] } }'

After running all the Docker containers for RabbitMQ, Logstash, Elasticsearch and Kibana, we can run our sample Spring Boot application and see the logs in Kibana, available at http://192.168.99.100:5601.