Jenkins nodes on Docker containers

Jenkins is the most popular open source automation server, written in Java. It has many interesting plugins and features. Today, I’m going to show you one of them – how to set up a Jenkins master server with one slave instance connected to it, so that we are able to run distributed builds using a few docker containers. For this sample we use the docker images of Jenkins (jenkins) and the Jenkins slave (jenkinsci/jnlp-slave). Let’s start by running the Jenkins docker container.

docker run -d --name jenkins -p 50000:50000 -p 50080:8080 jenkins

Go to the management console (http://192.168.99.100:50080) and select Manage Jenkins -> Manage Nodes, then click New Node. On the next page you have to enter the slave name – for this sample it is slave-1. After clicking OK you will see the new node on the list. Now you can configure it by clicking the settings button and display the node details by clicking the node name on the list.

[Image: jenkins-slave]

The new node is created but still disabled. After clicking the node you will see a page with its details. The important information is the secret shown in the launch command line. Copy that token.

[Image: jenkins-slave1]

Now we are going to run the docker image with the JNLP agent. In the docker run command we pass the Jenkins master URL, the secret token and the chosen node name (slave-1). If you would like to set it up without a docker container, you should download the slave agent JAR file by clicking the Launch button and run the agent from the command line like in the picture above.

docker run -d --name jenkins-slave1 jenkinsci/jnlp-slave -url http://192.168.99.100:50080 5d681c12e9c68f14373d62375e852d0874ea9daeca3483df4c858ad3556d406d slave-1

After running the slave container you should see the name slave-1 in the Build Executor Status below the master node.

[Image: jenkins-slave2]

Now we can configure a sample Jenkins pipeline to test our new slave. Pipeline builds can be run on the master node or on a slave node. Here’s a sample pipeline fragment. To try this sample you need to have the Pipeline Plugin installed on your Jenkins server.

node() {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}

You can select the node for running your pipeline by providing the node name. Now the build always runs on the slave-1 node.

node('slave-1') {
    stage('Checkout') {
        ...
    }

    stage('Build') {
        ...
    }
}
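
The stage bodies above are elided; here’s a minimal runnable variant, assuming a Maven project in a Git repository (the repository URL and build command are placeholders to adjust):

node('slave-1') {
    stage('Checkout') {
        // check out the sources into the slave's workspace
        git url: 'https://github.com/example/sample-project.git'
    }

    stage('Build') {
        // run the build; assumes Maven is available on the slave
        sh 'mvn clean install'
    }
}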

Apache Karaf Microservices

Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed.

Apache Karaf can be run as a standalone container and provides some enterprise-ready features like a shell console, remote access, hot deployment and dynamic configuration. It can be a perfect solution for microservices. The idea of microservices on Apache Karaf was introduced a few years ago: “What I am promoting is the idea of µServices, the concepts of an OSGi service as a design primitive.” – Peter Kriens, March 2010.

Karaf on Docker

First, we need to run a docker container with Apache Karaf. Surprisingly, there is no official repository with such an image. I found an image with Karaf on Docker Hub here. Unfortunately, it does not expose port 8181 – the default Karaf web port. We will use this image to create our own with port 8181 available outside. Here’s our Dockerfile.

FROM java:8-jdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

ENV KARAF_VERSION=4.0.8

RUN wget http://www-us.apache.org/dist/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /opt/karaf; \
    tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz; \
    rm apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /deploy; \
    sed -i 's/^\(felix\.fileinstall\.dir\s*=\s*\).*$/\1\/deploy/' /opt/karaf/etc/org.apache.felix.fileinstall-deploy.cfg

VOLUME ["/deploy"]
EXPOSE 1099 8101 8181 44444
ENTRYPOINT ["/opt/karaf/bin/karaf"]

Then, by running the docker commands below, we build our image from the Dockerfile and start a new Karaf container.

docker build -t karaf-api .
docker run -d --name karaf -p 1099:1099 -p 8101:8101 -p 8181:8181 -p 44444:44444 karaf-api

Now we can log into the new docker container (1). Karaf is installed in the /opt/karaf directory. We should run the client by calling ./client in the /opt/karaf/bin directory (2). Then we install the Apache Felix web console, which is by default available under port 8181 (3). You can check it out by calling http://192.168.99.100:8181/system/console in a web browser. The default username and password is karaf. In the web console you can check the full list of features installed on our OSGi container. You can also display that list in the Karaf console using the feature:list command (4). After installing the web console you can decide whether you prefer using the Karaf command line or the Apache Felix console for further actions. For our sample application we need to add some OSGi repositories and features. First, we add the Apache CXF framework repository (5) and its features for HTTP and RESTful web services (6). Then we add the repository for the Jackson framework (7) and some Jackson and Jetty server features (8).

docker exec -i -t karaf /bin/bash (1)
cd /opt/karaf/bin
./client (2)
karaf@root()> feature:install webconsole (3)
karaf@root()> feature:list (4)
karaf@root()> feature:repo-add cxf 3.1.10 (5)
karaf@root()> feature:install http cxf-jaxrs cxf (6)
karaf@root()> feature:repo-add mvn:org.code-house.jackson/features/2.7.6/xml/features (7)
karaf@root()> feature:install jackson-jaxrs-json-provider jetty (8)

Microservices

Our environment has been configured, so now we can take a brief look at the sample application. It’s really simple. It has only three modules: account-cxf, customer-cxf and sample-api. In the sample-api module we have the base service interfaces and model objects. In account-cxf and customer-cxf there are service implementations and OSGi service declarations in a Blueprint file. The sample application source code is available on GitHub. Here’s the account service controller class, and its interface below.

public class AccountServiceImpl implements AccountService {

	private List<Account> accounts;

	public AccountServiceImpl() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, "1234567890", 12345, 1));
		accounts.add(new Account(2, "1234567891", 6543, 2));
		accounts.add(new Account(3, "1234567892", 45646, 3));
	}

	public Account findById(Integer id) {
		return accounts.stream().filter(a -> a.getId().equals(id)).findFirst().get();
	}

	public List<Account> findAll() {
		return accounts;
	}

	public Account add(Account account) {
		accounts.add(account);
		account.setId(accounts.size());
		return account;
	}

	@Override
	public List<Account> findAllByCustomerId(Integer customerId) {
		return accounts.stream().filter(a -> a.getCustomerId().equals(customerId)).collect(Collectors.toList());
	}

}

The AccountService interface is in the sample-api module. We use JAX-RS annotations for declaring REST endpoints.

public interface AccountService {

	@GET
	@Path("/{id}")
	@Produces("application/json")
	public Account findById(@PathParam("id") Integer id);

	@GET
	@Path("/")
	@Produces("application/json")
	public List<Account> findAll();

	@GET
	@Path("/customer/{customerId}")
	@Produces("application/json")
	public List<Account> findAllByCustomerId(@PathParam("customerId") Integer customerId);

	@POST
	@Path("/")
	@Consumes("application/json")
	@Produces("application/json")
	public Account add(Account account);

}

Here you can see the OSGi service declarations in the blueprint.xml file. We have declared the accountServiceImpl bean and set that bean as a service bean for the JAX-RS endpoint. The endpoint uses JacksonJsonProvider as the data format provider. There is also an important OSGi service declaration exposing AccountServiceImpl under the AccountService interface. This service will be available to other microservices deployed on the Karaf container, for example customer-cxf.

    <cxf:bus id="accountRestBus">
    </cxf:bus>

    <bean id="accountServiceImpl" class="pl.piomin.services.cxf.account.service.AccountServiceImpl"/>
    <service ref="accountServiceImpl" interface="pl.piomin.services.cxf.api.AccountService" />

    <jaxrs:server address="/account" id="accountService">
        <jaxrs:serviceBeans>
            <ref component-id="accountServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
        <jaxrs:providers>
        	<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider"/>
        </jaxrs:providers>
    </jaxrs:server>

Now, let’s take a look at the customer-cxf microservice. Here’s the OSGi blueprint of that service. The JAX-RS server declaration is pretty similar to the one for account-cxf. There is only one addition in comparison with the previously presented OSGi blueprint – a reference to AccountService. This reference is injected into CustomerServiceImpl.

	<reference id="accountService" 		interface="pl.piomin.services.cxf.api.AccountService" />

	<bean id="customerServiceImpl" 		class="pl.piomin.services.cxf.customer.service.CustomerServiceImpl">
		<property name="accountService" ref="accountService" />
	</bean>

	<jaxrs:server address="/customer" id="customerService">
		<jaxrs:serviceBeans>
			<ref component-id="customerServiceImpl" />
		</jaxrs:serviceBeans>
		<jaxrs:features>
			<cxf:logging />
		</jaxrs:features>
		<jaxrs:providers>
			<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
		</jaxrs:providers>
	</jaxrs:server>

CustomerService uses the OSGi reference to AccountService in the findById method to collect all accounts belonging to the customer with the specified id path parameter, and also exposes some other operations.

public class CustomerServiceImpl implements CustomerService {

	private AccountService accountService;

	private List<Customer> customers;

	public CustomerServiceImpl() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "XXX", "1234567890"));
		customers.add(new Customer(2, "YYY", "1234567891"));
		customers.add(new Customer(3, "ZZZ", "1234567892"));
	}

	@Override
	public Customer findById(Integer id) {
		Customer c = customers.stream().filter(a -> a.getId().equals(id)).findFirst().get();
		c.setAccounts(accountService.findAllByCustomerId(id));
		return c;
	}

	@Override
	public List<Customer> findAll() {
		return customers;
	}

	@Override
	public Customer add(Customer customer) {
		customers.add(customer);
		customer.setId(customers.size());
		return customer;
	}

	public AccountService getAccountService() {
		return accountService;
	}

	public void setAccountService(AccountService accountService) {
		this.accountService = accountService;
	}

}

Each service has packaging type bundle inside pom.xml and uses maven-bundle-plugin during the build process. After running mvn clean install on the root project, all bundles will be generated in the target directories. You can install them using the Apache Felix web console or the Karaf command line client in this order: sample-api, account-cxf, customer-cxf, as shown below.
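
For the command line option, here’s a sketch of the install commands, assuming the bundles were installed into the local Maven repository with groupId pl.piomin.services and version 1.0-SNAPSHOT (adjust the coordinates to your build):

karaf@root()> bundle:install -s mvn:pl.piomin.services/sample-api/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services/account-cxf/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services/customer-cxf/1.0-SNAPSHOT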

Testing

Finally, you can see the list of available CXF endpoints on Karaf by calling http://192.168.99.100:8181/cxf in your web browser. Call http://192.168.99.100:8181/cxf/customer/1 to test the findById method of CustomerService. You should see JSON with the customer data and all its accounts collected from the account microservice.
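
You can run the same test from the command line, for example:

curl http://192.168.99.100:8181/cxf/customer/1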

Conclusion

Treat this post as a short introduction to the microservices concept on the Apache Karaf OSGi container. I showed you how to use CXF endpoints on the Karaf container as a kind of service gateway, and OSGi services for inter-communication between the deployed microservices. Instead of an OSGi reference we could use a JAX-RS proxy client for connecting to the account service from the customer service. You can find some basic examples of that concept on the web. There are also more advanced solutions available for service registration and discovery on Karaf, for example remote service calls with Apache ZooKeeper. I think we will take a closer look at them in subsequent posts.

Microservices with Apache Camel

Apache Camel, as usual, is a step behind the Spring framework, and microservices architecture is no exception. However, Camel introduced a new set of components for building microservices some months ago. Its newest version, 2.18, has support for load balancing with Netflix Ribbon, circuit breaking with Netflix Hystrix, distributed tracing with Zipkin, and service registration and discovery with Consul. The key new component for microservices support in Camel is the ServiceCall EIP, which allows calling a remote service in a distributed system, where the service is looked up in a service registry. There are four tools which can be used as a service registry for Apache Camel: etcd, Kubernetes, Ribbon and Consul. Release 2.18 also comes with much-improved Spring Boot support.

In this article I’m going to show you how to develop microservices in Camel with its support for Spring Boot, REST DSL and Consul. The sample application is available on GitHub. Below you see a picture of our application architecture.

[Image: camel-arch]

To enable Spring Boot support in a Camel application we need to add the following dependency to pom.xml. After that, we have to annotate our main class with @SpringBootApplication and set the property camel.springboot.main-run-controller=true in the application configuration file (application.properties or application.yml).

<dependency>
	<groupId>org.apache.camel</groupId>
	<artifactId>camel-spring-boot-starter</artifactId>
	<version>${camel.version}</version>
</dependency>
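
Putting those two steps together, here’s a minimal sketch of the main class (the class name is just an example):

@SpringBootApplication
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}

And the entry in application.properties:

camel.springboot.main-run-controller=true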

Then we just have to create a Spring @Component extending Camel’s RouteBuilder. Inside the route builder configuration we declare REST endpoints using the Camel REST DSL. It’s really simple and intuitive. In the code visible below I exposed four REST endpoints: three for the GET method and a single one for POST. We are using the netty4-http component as a web container for exposing REST endpoints with JSON binding. We also have to add two dependencies to pom.xml: camel-netty4-http for the Netty framework and the camel-jackson library for enabling consuming and producing JSON data. All routes forward input requests to different methods inside a Spring service @Component.

@Component
public class AccountRoute extends RouteBuilder {

	@Value("${port}")
	private int port;

	@Override
	public void configure() throws Exception {
		restConfiguration()
			.component("netty4-http")
			.bindingMode(RestBindingMode.json)
			.port(port);

		rest("/account")
			.get("/{id}")
				.to("bean:accountService?method=findById(${header.id})")
			.get("/customer/{customerId}")
				.to("bean:accountService?method=findByCustomerId(${header.customerId})")
			.get("/")
				.to("bean:accountService?method=findAll")
			.post("/").consumes("application/json").type(Account.class)
				.to("bean:accountService?method=add(${body})");
	}

}

The next element in our architecture is the service registry component. We decided to use Consul. The simplest way to run it locally is to pull its docker image and run it using the docker command below. Consul provides a UI management console and a REST API for registering and searching services and key/value objects. The REST API is available under the v1 path and is well documented here.

docker run -d --name consul -p 8500:8500 -p 8600:8600 consul

Well, we have the account microservice implemented and a Consul instance running, so we would like to register our service there. And here we’ve got a problem. There is no out-of-the-box mechanism in Camel for service registration – there is only a component for looking services up. To be more precise, I didn’t find any description of such a mechanism in the Camel documentation… However, it may exist… somewhere. Maybe you know how to find it? Here’s an interesting solution for a Camel Consul registry, but I didn’t check it out. I decided on a simpler solution implemented by myself. I added the two routes below to the AccountRoute class.

from("direct:start").marshal().json(JsonLibrary.Jackson)
	.setHeader(Exchange.HTTP_METHOD, constant("PUT"))
	.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
	.to("http://192.168.99.100:8500/v1/agent/service/register");
from("direct:stop").shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
	.toD("http://192.168.99.100:8500/v1/agent/service/deregister/${header.id}");

The direct:start route is run after Camel context startup and direct:stop before shutdown. Below is the EventNotifierSupport implementation which calls those routes during the startup and shutdown process. You can also try the camel-consul component, but in my opinion it is not well described in the Camel documentation. The list of services registered in Consul is available here: http://192.168.99.100:8500/v1/agent/services. I launch my account service with the VM argument -Dport, and it should be registered in Consul with the account${port} ID.

@Component
public class EventNotifier extends EventNotifierSupport {

	@Value("${port}")
	private int port;

	@Override
	public void notify(EventObject event) throws Exception {
		if (event instanceof CamelContextStartedEvent) {
			CamelContext context = ((CamelContextStartedEvent) event).getContext();
			ProducerTemplate t = context.createProducerTemplate();
			t.sendBody("direct:start", new Register("account" + port, "account", "127.0.0.1", port));
		}
		if (event instanceof CamelContextStoppingEvent) {
			CamelContext context = ((CamelContextStoppingEvent) event).getContext();
			ProducerTemplate t = context.createProducerTemplate();
			t.sendBodyAndHeader("direct:stop", null, "id", "account" + port);
		}
	}

	@Override
	public boolean isEnabled(EventObject event) {
		return (event instanceof CamelContextStartedEvent || event instanceof CamelContextStoppingEvent);
	}

}
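
The Register class passed to direct:start is not listed in the post. Here’s a minimal sketch of it, assuming field names matching Consul’s /v1/agent/service/register payload (ID, Name, Address, Port):

public class Register {

	@JsonProperty("ID")
	private String id;
	@JsonProperty("Name")
	private String name;
	@JsonProperty("Address")
	private String address;
	@JsonProperty("Port")
	private int port;

	public Register(String id, String name, String address, int port) {
		this.id = id;
		this.name = name;
		this.address = address;
		this.port = port;
	}

	// getters omitted for brevity – Jackson needs them (or field visibility)
	// to serialize the object in the direct:start route

}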

The last (but not least) element of our architecture is the gateway. We also use Netty there, for exposing REST services on port 8000.

restConfiguration()
	.component("netty4-http")
	.bindingMode(RestBindingMode.json)
	.port(8000);

We also have to provide the configuration for the connection with the Consul registry and set it on the CamelContext by calling the setServiceCallConfiguration method.

ConsulConfigurationDefinition config = new ConsulConfigurationDefinition();
config.setComponent("netty4-http");
config.setUrl("http://192.168.99.100:8500");
context.setServiceCallConfiguration(config);

Finally, we define routes which map paths set on the gateway to services registered in Consul using the ServiceCall EIP. Now you can call one of those URLs in your web browser, for example http://localhost:8000/account/1. If you would also like to map a path inside the serviceCall EIP, you need to put a double slash ‘//’ instead of the single slash ‘/’ described in the Camel documentation – for example from(“rest:get:account”).serviceCall(“account//all”), not serviceCall(“account/all”).

from("rest:get:account:/{id}").serviceCall("account");
from("rest:get:account:/customer/{customerId}").serviceCall("account");
from("rest:get:account:/").serviceCall("account");
from("rest:post:account:/").serviceCall("account");

Conclusion

I was positively surprised by Camel. Before I started working on the sample described in this post I didn’t expect Camel to have so many features for building microservice solutions, or that working with them would be so simple and fast. Of course, I can also find some disadvantages, like inaccuracies or errors in the documentation, only short descriptions of some new components in the developer guide, and no registration process for a discovery server like Consul. In these areas I see an advantage for the Spring framework. On the other hand, Camel has support for some useful tools like etcd or Kubernetes, which are not available in Spring. In conclusion, I’m looking forward to further improvements in Camel components for building microservices.

RabbitMQ in cluster

RabbitMQ has grown into the most popular message broker software. It is written in Erlang and implements the Advanced Message Queuing Protocol (AMQP). It is easy to use and configure, even if we are talking about such mechanisms as clustering or high availability. In this post I’m going to show you how to run several instances of RabbitMQ provided in docker containers, in a cluster with highly available (HA) queues. Based on a sample Java application we’ll see how to send messages to and receive messages from the RabbitMQ cluster, and check how this message broker handles a large number of incoming messages. The sample Spring Boot application is available on GitHub. Here’s a picture illustrating the architecture of the presented solution.

[Image: rabbitmq-cluster]

We use the official Docker repository of RabbitMQ. Here are the commands for running three RabbitMQ nodes. The first node is the master of the cluster – the two other nodes will join it. We use the rabbitmq:management image to enable the UI administration console on each node. Every node has the default connection and UI management ports exposed. An important thing is to link the rabbit2 and rabbit3 containers to rabbit1, which is necessary for joining the cluster managed by rabbit1.

docker run -d --hostname rabbit1 --name rabbit1 -e RABBITMQ_ERLANG_COOKIE='rabbitcluster' -p 30000:5672 -p 30001:15672 rabbitmq:management
docker run -d --hostname rabbit2 --name rabbit2 --link rabbit1:rabbit1 -e RABBITMQ_ERLANG_COOKIE='rabbitcluster' -p 30002:5672 -p 30003:15672 rabbitmq:management
docker run -d --hostname rabbit3 --name rabbit3 --link rabbit1:rabbit1 -e RABBITMQ_ERLANG_COOKIE='rabbitcluster' -p 30004:5672 -p 30005:15672 rabbitmq:management

OK, now there are three RabbitMQ instances running. We can go to the UI management console of each of those docker containers, for example http://192.168.99.100:30001 for the rabbit1 node. Each instance initially runs as its own independent cluster, as we can see in the pictures below. We would like to make all instances work in the same cluster, rabbit@rabbit1.

[Image: rabbit_cluster]

[Image: rabbit_cluster2]

Here’s the set of commands run on the rabbit2 instance to join the cluster rabbit@rabbit1. The same set should be run on the rabbit3 node. First we have to connect to the docker container and run bash. Before running the rabbitmqctl join_cluster command we have to stop the broker application.

docker exec -i -t rabbit2 /bin/bash
root@rabbit2:/# rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...
root@rabbit2:/# rabbitmqctl join_cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with rabbit@rabbit1 ...
root@rabbit2:/# rabbitmqctl start_app
Starting node rabbit@rabbit2 ...

If everything was successful, we should see the cluster name rabbit@rabbit1 in the upper right corner of the rabbit2 management console. You should also see the list of running nodes in the Nodes section. You can also check the cluster status by running the rabbitmqctl cluster_status command on every node, which should also display the list of all cluster nodes.

[Image: rabbit_cluster3]

After starting all nodes, go to the UI management console on one of them. Now we are going to configure High Availability for a selected queue. It is not important which node you choose, because they are all in one cluster. In the Queues tab create a queue named q.example. Then go to the Admin tab, select the Policies section and create a new policy. In the picture below you can see the policy I created. I selected ha-mode=all, which means that the queue is mirrored across all nodes in the cluster, and when a new node is added to the cluster, the queue will be mirrored to that node too. There are also exactly and nodes modes available – you can find more about RabbitMQ High Availability here. In the pattern field enter your queue name, and in apply to select Queues. If everything succeeded, you should see the ha-all feature in the queue’s row.

[Image: rabbit_cluster5]
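
The same policy can also be created from the command line instead of the UI; for example, inside one of the node containers:

rabbitmqctl set_policy --apply-to queues ha-all "q.example" '{"ha-mode":"all"}'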

One of the greatest advantages of RabbitMQ is its monitoring. You can see many statistics like memory and disk usage, I/O statistics, detailed message rates, graphs etc. You can see some of them below.

[Image: rabbit_cluster6]

[Image: rabbit_cluster7]

RabbitMQ has great support in the Spring framework. There are many projects which use a RabbitMQ implementation by default, for example Spring Cloud Stream and Spring Cloud Sleuth. I’m going to show you a sample Spring Boot application that sends messages to the RabbitMQ cluster and receives them from the HA queue. The application source code is available on GitHub. Here’s the main class of the application. We enable the RabbitMQ listener by declaring @EnableRabbit on the class and @RabbitListener on the receiving method. We also have to declare the listened queue, the broker connection factory and the listener container factory to allow listener concurrency. Inside CachingConnectionFactory we set all three addresses of the RabbitMQ cluster instances: 192.168.99.100:30000, 192.168.99.100:30002 and 192.168.99.100:30004.

@SpringBootApplication
@EnableRabbit
public class Listener {

	private static Logger logger = Logger.getLogger("Listener");

	public static void main(String[] args) {
		SpringApplication.run(Listener.class, args);
	}

	@RabbitListener(queues = "q.example")
	public void onMessage(Order order) {
		logger.info(order.toString());
	}

	@Bean
	public ConnectionFactory connectionFactory() {
		CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
		connectionFactory.setUsername("guest");
		connectionFactory.setPassword("guest");
		connectionFactory.setAddresses("192.168.99.100:30000,192.168.99.100:30002,192.168.99.100:30004");
		connectionFactory.setChannelCacheSize(10);
		return connectionFactory;
	}

	@Bean
	public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
		SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory());
		factory.setConcurrentConsumers(10);
		factory.setMaxConcurrentConsumers(20);
		return factory;
	}

	@Bean
	public Queue queue() {
		return new Queue("q.example");
	}

}
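
The sending side of the sample is not listed here; below is a minimal sketch using RabbitTemplate (the Order constructor and the connection settings are assumptions – the GitHub sample may differ):

@SpringBootApplication
public class Sender {

	public static void main(String[] args) {
		SpringApplication.run(Sender.class, args);
	}

	// assumes a ConnectionFactory bean configured with the same three
	// cluster addresses as in the Listener class
	@Bean
	public CommandLineRunner send(RabbitTemplate template) {
		return args -> {
			for (int i = 0; i < 100; i++)
				template.convertAndSend("q.example", new Order(i));
		};
	}

}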

Conclusion

Clustering and High Availability configuration with RabbitMQ is pretty simple. I like RabbitMQ’s support for the cluster monitoring process in the UI management console. In my opinion it is user friendly and intuitive. In the sample application I sent 100k messages to the sample queue. Using 20 concurrent consumers they were processed in 65 seconds (~80/s per consumer thread), and memory usage at its peak was about 400MB on each node. Of course, our application just receives the object message and logs it to the console.

Microservices security with Oauth2

Preface

One of the most important aspects to consider when exposing a public access API consisting of many microservices is security. Spring has some interesting features and frameworks which make the security configuration of our microservices easier. In this article I’m going to show you how to use Spring Cloud and OAuth2 to provide token-based access security behind an API gateway.

Theory

The OAuth2 standard is currently used by all the major websites that allow you to access their resources through a shared API. It is an open authorization standard allowing users to share private resources stored on one site with another site without having to hand out their credentials. Here are the basic terms related to OAuth2.

  • Resource Owner – the entity that can grant access to the protected resource
  • Resource Server – the server that stores the owner’s resources, which can be shared using a special token
  • Authorization Server – manages the allocation of keys, tokens and other temporary resource access codes. It also has to ensure that access is granted to the relevant person
  • Access Token – the key that allows access to a resource
  • Authorization Grant – grants permission for access. There are different ways to confirm access: authorization code, implicit, resource owner password credentials, and client credentials

You can read more about this standard here and in this digitalocean article. The flow of this protocol has three main steps. In the beginning, an authorization request is sent to the Resource Owner. After the response from the Resource Owner, we send an authorization grant request to the Authorization Server and receive an access token. Finally, we send this access token to the Resource Server, and if it is valid, the API serves the resource to the application.

Our solution

The picture below shows the architecture of our sample. We have an API gateway (Zuul) which proxies our requests to the authorization server and two instances of the account microservice. The authorization server is a kind of infrastructure service which provides the OAuth2 security mechanisms. We also have a discovery service (Eureka) where all of our microservices are registered.

[Image: sec-micro]

Gateway

For our sample we won’t provide any security on the API gateway itself. It just has to proxy requests from clients to the authorization server and the account microservices. In the Zuul gateway configuration visible below we set the sensitiveHeaders property to an empty value to enable forwarding of the Authorization HTTP header. By default Zuul cuts that header off while forwarding our request to the target API, which would break the authorization demanded by our services behind the gateway.

zuul:
  routes:
    uaa:
      path: /uaa/**
      sensitiveHeaders:
      serviceId: auth-server
    account:
      path: /account/**
      sensitiveHeaders:
      serviceId: account-service

The main class inside the gateway source code is very simple. It only has to enable the Zuul proxy feature and the discovery client for collecting services from the Eureka registry.

@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
public class GatewayServer {

	public static void main(String[] args) {
		SpringApplication.run(GatewayServer.class, args);
	}

}

Authorization Server

Our authorization server is as simple as possible. It is based on the default Spring Security configuration. Client authorization details are stored in an in-memory repository. Of course, in production you would like to use other implementations instead of the in-memory repository, like a JDBC datasource and token store. You can read more about Spring authorization mechanisms in the Spring Security Reference and Spring Boot Security documentation. Here’s a fragment of the configuration from application.yml. We provided the user’s basic authentication data and the basic security credentials for the /token endpoint: client-id and client-secret. The user credentials are the normal Spring Security user details.

security:
  user:
    name: root
    password: password
  oauth2:
    client:
      client-id: acme
      client-secret: secret
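
With those credentials you can already request a token directly from the authorization server, for example with the resource owner password grant – a sketch, assuming the server runs on port 9999 (like the userInfoUri shown later) and that the default grant types are enabled:

curl acme:secret@localhost:9999/oauth/token -d grant_type=password -d username=root -d password=password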

Here’s the main class of our authorization server with @EnableAuthorizationServer. We also exposed one REST endpoint with the user authentication details for the account service, and enabled Eureka registration and discovery for clients.

@SpringBootApplication
@EnableAuthorizationServer
@EnableDiscoveryClient
@EnableResourceServer
@RestController
public class AuthServer {

	public static void main(String[] args) {
		SpringApplication.run(AuthServer.class, args);
	}

	@RequestMapping("/user")
	public Principal user(Principal user) {
		return user;
	}

}

Application – account microservice

Our sample microservice has only one endpoint for @GET requests, which always returns the same account. In the main class, the resource server and Eureka discovery are enabled. The service configuration, shown below the class, is trivial. The sample application source code is available on GitHub.

@SpringBootApplication
@EnableDiscoveryClient
@EnableResourceServer
public class AccountService {

	public static void main(String[] args) {
		SpringApplication.run(AccountService.class, args);
	}

}

Here’s its configuration from application.yml:

security:
  user:
    name: root
    password: password
  oauth2:
    resource:
      loadBalanced: true
      userInfoUri: http://localhost:9999/user

Testing

We only need a web browser and a REST client (for example the Chrome Advanced REST client) to test our solution. Let’s start by sending an authorization request to the resource owner. We can call the OAuth2 authorize endpoint via the Zuul gateway in the web browser.

http://localhost:8765/uaa/oauth/authorize?response_type=token&client_id=acme&redirect_uri=http://example.com&scope=openid&state=48532

After sending this request we should see the page below. Select Approve and click Authorize to request an access token from the authorization server. If the application identity is authenticated and the authorization grant is valid, an access token is returned to the application in the HTTP response.

[Image: oauth2]

http://example.com/#access_token=b1acaa35-1ebd-4995-987d-56ee1c0619e5&token_type=bearer&state=48532&expires_in=43199

And the final step is to call the account endpoint using the access token. We have to put it into the Authorization header as a bearer token. In the sample application the logging level for security operations is set to TRACE, so you can easily find out what happened if something goes wrong.
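
For example, using the access token returned above and calling the account service via the gateway (the /account/1 path is just an illustration – adjust it to the endpoint exposed by the service):

curl -H "Authorization: Bearer b1acaa35-1ebd-4995-987d-56ee1c0619e5" http://localhost:8765/account/1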

[Image: call]

Conclusion

To be honest, I’m not very familiar with security issues in applications, so one very important thing for me is the simplicity of the security solution I decide to use. In Spring Security we have almost all the needed mechanisms out of the box. It also provides components which can easily be extended for more advanced requirements. You should treat this article as a brief introduction to more advanced solutions using the Spring Cloud and Spring Security projects.

Reactive microservices with Spring 5

The Spring team has announced support for the reactive programming model in the 5.0 release. The new Spring version will probably be released in March. Fortunately, milestone and snapshot versions with these changes are already available in the public Spring repositories. There is a new Spring Web Reactive project with support for reactive @Controller, and also a new WebClient with client-side reactive support. Today I’m going to take a closer look at the solutions suggested by the Spring team.

Following the Spring WebFlux documentation, the Spring Framework uses Reactor internally for its own reactive support. Reactor is a Reactive Streams implementation that further extends the basic Reactive Streams Publisher contract with the Flux and Mono composable API types to provide declarative operations on data sequences of 0..N and 0..1. On the server side Spring supports annotation-based and functional programming models. The annotation model uses @Controller and the other annotations also supported with Spring MVC. A reactive controller is very similar to a standard REST controller for synchronous services, except that it uses Flux, Mono and Publisher objects. Today I’m going to show you how to develop simple reactive microservices using the annotation model and the MongoDB reactive module. The sample application source code is available on GitHub.

For our example we need to use snapshots of Spring Boot 2.0.0 and Spring Web Reactive 0.1.0. Here’s the main fragment of a single microservice’s pom.xml. In our microservices we use Netty instead of the default Tomcat server.

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.0.BUILD-SNAPSHOT</version>
	</parent>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot.experimental</groupId>
				<artifactId>spring-boot-dependencies-web-reactive</artifactId>
				<version>0.1.0.BUILD-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot.experimental</groupId>
			<artifactId>spring-boot-starter-web-reactive</artifactId>
			<exclusions>
				<exclusion>
					<groupId>org.springframework.boot</groupId>
					<artifactId>spring-boot-starter-tomcat</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.ipc</groupId>
			<artifactId>reactor-netty</artifactId>
		</dependency>
		<dependency>
			<groupId>io.netty</groupId>
			<artifactId>netty-all</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>common</artifactId>
			<version>${project.version}</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.addons</groupId>
			<artifactId>reactor-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

We have two microservices: account-service and customer-service. Each of them has its own MongoDB database, and they expose a simple reactive API for searching and saving data. Also, customer-service interacts with account-service to get all the customer’s accounts and return them from a customer-service method. Here’s our account controller code.

@RestController
public class AccountController {

	@Autowired
	private AccountRepository repository;

	@GetMapping(value = "/account/customer/{customer}")
	public Flux<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		return repository.findByCustomerId(customerId)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account")
	public Flux<Account> findAll() {
		return repository.findAll().map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account/{id}")
	public Mono<Account> findById(@PathVariable("id") Integer id) {
		return repository.findById(id)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@PostMapping("/person")
	public Mono<Account> create(@RequestBody Publisher<Account> accountStream) {
		return repository
				.save(Mono.from(accountStream)
						.map(a -> new pl.piomin.services.account.model.Account(a.getNumber(), a.getCustomerId(),
								a.getAmount())))
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

}
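
The Account DTO from the common module is not listed in the post; here’s a minimal sketch based on the constructor calls above (the field types are assumptions):

public class Account {

	private Integer id;
	private Integer customerId;
	private String number;
	private int amount;

	public Account() {
	}

	public Account(Integer id, Integer customerId, String number, int amount) {
		this.id = id;
		this.customerId = customerId;
		this.number = number;
		this.amount = amount;
	}

	// getters and setters omitted for brevity

}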

In all API methods we also perform mapping from the Account entity (a MongoDB @Document) to the Account DTO available in our common module. Here’s the account repository class. It uses ReactiveMongoTemplate for interacting with Mongo collections.

@Repository
public class AccountRepository {

	@Autowired
	private ReactiveMongoTemplate template;

	public Mono<Account> findById(Integer id) {
		return template.findById(id, Account.class);
	}

	public Flux<Account> findAll() {
		return template.findAll(Account.class);
	}

	public Flux<Account> findByCustomerId(Integer customerId) {
		return template.find(query(where("customerId").is(customerId)), Account.class);
	}

	public Mono<Account> save(Mono<Account> account) {
		return template.insert(account);
	}

}

In our Spring Boot main or @Configuration class we should declare Spring beans for MongoDB with the connection settings.

@SpringBootApplication
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	public @Bean MongoClient mongoClient() {
		return MongoClients.create("mongodb://192.168.99.100");
	}

	public @Bean ReactiveMongoTemplate reactiveMongoTemplate() {
		return new ReactiveMongoTemplate(mongoClient(), "account");
	}

}

I used a docker MongoDB container while working on this sample.

docker run -d --name mongo -p 27017:27017 mongo

In the customer service we call the /account/customer/{customer} endpoint of the account service. I declared a WebClient @Bean in our main class.

	public @Bean WebClient webClient() {
		return WebClient.builder().clientConnector(new ReactorClientHttpConnector()).baseUrl("http://localhost:2222").build();
	}

Here’s a customer controller fragment. The @Autowired WebClient calls the account service after getting the customer from MongoDB.

	@Autowired
	private WebClient webClient;

	@GetMapping(value = "/customer/accounts/{pesel}")
	public Mono<Customer> findByPeselWithAccounts(@PathVariable("pesel") String pesel) {
		return repository.findByPesel(pesel)
				.flatMap(customer -> webClient.get().uri("/account/customer/{customer}", customer.getId())
						.accept(MediaType.APPLICATION_JSON).exchange()
						.flatMap(response -> response.bodyToFlux(Account.class)))
				.collectList().map(l -> new Customer(pesel, l));
	}

We can test GET calls using a web browser or a REST client. With POST it’s not so simple. Here are two simple test cases for adding a new customer and getting a customer with accounts. The getCustomerAccounts test needs the account service running on port 2222.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerTest {

	private static final Logger logger = Logger.getLogger("CustomerTest");

	private WebClient webClient;

	@LocalServerPort
	private int port;

	@Before
	public void setup() {
		this.webClient = WebClient.create("http://localhost:" + this.port);
	}

	@Test
	public void getCustomerAccounts() {
		Customer customer = this.webClient.get().uri("/customer/accounts/234543647565")
				.accept(MediaType.APPLICATION_JSON).exchange().then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

	@Test
	public void addCustomer() {
		Customer customer = new Customer(null, "Adam", "Kowalski", "123456787654");
		customer = webClient.post().uri("/customer").accept(MediaType.APPLICATION_JSON)
				.exchange(BodyInserters.fromObject(customer)).then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

}

Conclusion

The Spring initiative with support for reactive programming seems promising, but for now it’s at an early stage of development. It is not yet possible to use it with popular projects from Spring Cloud like Eureka, Ribbon or Hystrix. When I tried to add these dependencies to pom.xml my service failed to start. I hope that in the near future such functionalities as service discovery and load balancing will be available for reactive microservices, the same as for synchronous REST microservices. Spring also has support for the reactive model in the Spring Cloud Stream project. It’s more stable than the Web Reactive framework. I’ll try to use it in the future.

Event driven microservices using Spring Cloud Stream and RabbitMQ

Before we start, let’s look at the Spring Cloud Quick Start site. There is a list of spring-cloud releases available, grouped as release trains. We use the newest release train Camden.SR5, with Spring Boot 1.4.4.RELEASE and the Brooklyn.SR2 version of Spring Cloud Stream.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>1.4.4.RELEASE</version>
</parent>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Camden.SR5</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Here’s our architecture visualization. The order service sends a message to a RabbitMQ topic exchange. The product and shipment services listen on that topic for incoming order messages and then process them. After processing, they send a reply message to the topic on which the payment service listens. The payment service stores incoming messages, aggregating the replies from the product and shipment services, then counts the prices and sends the final response.

[Image: sample1]

Each service has the following dependencies. We have a sample-common module where the objects for messages sent to the topics are stored. They’re shared between all services. We’re also using Spring Cloud Sleuth for distributed tracing with one request id across all microservices.

	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream-binder-rabbit</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-sleuth</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>sample-common</artifactId>
			<version>${project.version}</version>
		</dependency>
	</dependencies>

Let me start with a few words on the theoretical aspects of Spring Cloud Stream. Here’s the short reference of that framework: Spring Cloud Stream Reference Guide. It’s based on Spring Integration and provides three predefined interfaces out of the box:

  • Source – can be used for an application which has a single outbound channel
  • Sink – can be used for an application which has a single inbound channel
  • Processor – can be used for an application which has both an inbound channel and an outbound channel

I’m going to show you sample usage of all of these interfaces. In the order service we’re using the Source class. Using the @InboundChannelAdapter and @Poller annotations we send an order message to the output once every 10 seconds.

@SpringBootApplication
@EnableBinding(Source.class)
public class Application {

	protected Logger logger = Logger.getLogger(Application.class.getName());

	private int index = 0;

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@Bean
	@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
	public MessageSource<Order> orderSource() {
		return () -> {
			Order o = new Order(index++, OrderType.PURCHASE, LocalDateTime.now(), OrderStatus.NEW, new Product(), new Shipment());
			logger.info("Sending order: " + o);
			return new GenericMessage<>(o);
		};
	}

	@Bean
	public AlwaysSampler defaultSampler() {
	  return new AlwaysSampler();
	}

}

Here’s the output configuration in the application.yml file.

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: ex.stream.in
          binder: rabbit1
        output:
          destination: ex.stream.out
          binder: rabbit1
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 192.168.99.100
                port: 30000
                username: guest
                password: guest

The product and shipment services use the Processor interface. They listen on the stream input and, after processing, send a message to their outputs.

@SpringBootApplication
@EnableBinding(Processor.class)
public class Application {

	@Autowired
	private ProductService productService;

	protected Logger logger = Logger.getLogger(Application.class.getName());

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@StreamListener(Processor.INPUT)
	@SendTo(Processor.OUTPUT)
	public Order processOrder(Order order) {
		logger.info("Processing order: " + order);
		return productService.processOrder(order);
	}

	@Bean
	public AlwaysSampler defaultSampler() {
		return new AlwaysSampler();
	}

}

Here’s the service configuration. It listens on the order service’s output exchange and also defines its group, named product. That group name will be used for automatic queue creation and exchange binding on RabbitMQ. There is also an output exchange defined.

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: ex.stream.out
          group: product
          binder: rabbit1
        output:
          destination: ex.stream.out2
          binder: rabbit1
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 192.168.99.100
                port: 30000
                username: guest
                password: guest

We use a docker container for running the RabbitMQ instance.

docker run -d --name rabbit1 -p 30001:15672 -p 30000:5672 rabbitmq:management

Let’s look at the management console. It’s available at http://192.168.99.100:30001. Here’s the ex.stream.out topic exchange configuration. Below it we see the list of declared queues.

[Image: rabbit1]

[Image: rabbit2]

Here’s the main application class from the payment service. We use the Sink interface for listening to incoming messages. The input order is processed, and we print the final price of the order sent by the order service. The sample application source code is available on GitHub.

@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

	@Autowired
	private PaymentService paymentService;

	protected Logger logger = Logger.getLogger(Application.class.getName());

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@StreamListener(Sink.INPUT)
	public void processOrder(Order order) {
		logger.info("Processing order: " + order);
		Order o = paymentService.processOrder(order);
		if (o != null)
			logger.info("Final response: " + (o.getProduct().getPrice() + o.getShipment().getPrice()));
	}

	@Bean
	public AlwaysSampler defaultSampler() {
		return new AlwaysSampler();
	}

}
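
The PaymentService aggregation logic is not shown in the post. Here’s a minimal sketch of how the two replies per order could be merged, assuming Order exposes an id and that each reply carries either the product or the shipment price (all names here are illustrative):

@Service
public class PaymentService {

	// parks the first reply for each order id until its counterpart arrives
	private final Map<Integer, Order> pending = new ConcurrentHashMap<>();

	public Order processOrder(Order order) {
		Order first = pending.remove(order.getId());
		if (first == null) {
			// first reply for this order – wait for the second one
			pending.put(order.getId(), order);
			return null;
		}
		// merge both replies so the order carries product and shipment prices
		if (order.getProduct().getPrice() == 0)
			order.setProduct(first.getProduct());
		if (order.getShipment().getPrice() == 0)
			order.setShipment(first.getShipment());
		return order;
	}

}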

By using the AlwaysSampler @Bean in every main class of our microservices, we propagate one trace and span id between all calls for a single order. Here’s a fragment from our microservices’ logging console. I also get the following warning message, which is not understandable for me: ‘Deprecated trace headers detected. Please upgrade Sleuth to 1.1 or start sending headers present in the TraceMessageHeaders class’. Is version 1.1.2.RELEASE of Spring Cloud Sleuth not compatible with the Camden.SR5 release?

[Image: logs]