Microservices security with OAuth2

Preface

One of the most important aspects to consider when exposing a public API consisting of many microservices is security. Spring has some interesting features and frameworks which make configuring microservices security easier. In this article I'm going to show you how to use Spring Cloud and OAuth2 to provide token-based access security behind an API gateway.

Theory

The OAuth2 standard is currently used by all the major websites that allow you to access their resources through a shared API. It is an open authorization standard that allows users to share private resources stored on one site with another site without having to hand over their credentials. Here are the basic terms related to OAuth2.

  • Resource Owner – the entity that can grant access to the resource
  • Resource Server – the server that stores the owner's resources, which can be shared using a special token
  • Authorization Server – manages the allocation of keys, tokens and other temporary resource access codes. It also has to ensure that access is granted to the relevant person
  • Access Token – the key that allows access to a resource
  • Authorization Grant – grants permission for access. There are different ways to confirm access: authorization code, implicit, resource owner password credentials, and client credentials

You can read more about this standard here and in this DigitalOcean article. The flow of this protocol has three main steps. In the beginning, an authorization request is sent to the Resource Owner. After the Resource Owner responds, we send an authorization grant request to the Authorization Server and receive an access token. Finally, we send this access token to the Resource Server and, if it is valid, the API serves the resource to the application.
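For illustration, a successful token response from an OAuth2 authorization server is a small JSON document similar to the one below (the field values here are made-up examples):

{
  "access_token": "b1acaa35-1ebd-4995-987d-56ee1c0619e5",
  "token_type": "bearer",
  "expires_in": 43199,
  "scope": "openid"
}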

Our solution

The picture below shows the architecture of our sample. We have an API gateway (Zuul) which proxies our requests to the authorization server and to two instances of the account microservice. The authorization server is a kind of infrastructure service which provides the OAuth2 security mechanisms. We also have a discovery service (Eureka) where all of our microservices are registered.

[Image: microservices security architecture]

Gateway

For our sample we won't provide any security on the API gateway. It just has to proxy requests from clients to the authorization server and the account microservices. In Zuul's gateway configuration, visible below, we set the sensitiveHeaders property to an empty value to enable forwarding of the Authorization HTTP header. By default Zuul cuts that header off while forwarding the request to the target API, which would break the authorization required by the services behind the gateway.

zuul:
  routes:
    uaa:
      path: /uaa/**
      sensitiveHeaders:
      serviceId: auth-server
    account:
      path: /account/**
      sensitiveHeaders:
      serviceId: account-service

The main class of the gateway is very simple. It only has to enable the Zuul proxy feature and the discovery client for collecting services from the Eureka registry.

@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
public class GatewayServer {

	public static void main(String[] args) {
		SpringApplication.run(GatewayServer.class, args);
	}

}

Authorization Server

Our authorization server is as simple as possible. It is based on the default Spring Security configuration. Client authorization details are stored in an in-memory repository. Of course, in production you would use other implementations instead of the in-memory repository, such as a JDBC data source and token store. You can read more about Spring authorization mechanisms in the Spring Security Reference and Spring Boot Security documentation. Here's a fragment of the configuration from application.yml. We provided basic user authentication data and the security credentials for the /token endpoint: client-id and client-secret. The user credentials are the standard Spring Security user details.

security:
  user:
    name: root
    password: password
  oauth2:
    client:
      client-id: acme
      client-secret: secret
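As a hedged sketch of such a production setup (assuming a configured javax.sql.DataSource bean pointing at a database with the standard OAuth2 token schema; the class name is illustrative), the authorization server could switch to a JDBC token store like this:

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;
import org.springframework.security.oauth2.provider.token.store.JdbcTokenStore;

@Configuration
public class JdbcAuthServerConfig extends AuthorizationServerConfigurerAdapter {

	@Autowired
	private DataSource dataSource; // assumption: a DataSource bean configured elsewhere

	@Override
	public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
		// persist access and refresh tokens in the oauth_access_token/oauth_refresh_token
		// tables instead of the default in-memory store
		endpoints.tokenStore(new JdbcTokenStore(dataSource));
	}

}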

Here's the main class of our authorization server, annotated with @EnableAuthorizationServer. We also expose one REST endpoint with user authentication details for the account service, and we enable Eureka registration and discovery for clients.

@SpringBootApplication
@EnableAuthorizationServer
@EnableDiscoveryClient
@EnableResourceServer
@RestController
public class AuthServer {

	public static void main(String[] args) {
		SpringApplication.run(AuthServer.class, args);
	}

	@RequestMapping("/user")
	public Principal user(Principal user) {
		return user;
	}

}

Application – account microservice

Our sample microservice has only one endpoint, for GET requests, which always returns the same account. In the main class the resource server and Eureka discovery are enabled. The service configuration is trivial. Sample application source code is available on GitHub.

@SpringBootApplication
@EnableDiscoveryClient
@EnableResourceServer
public class AccountService {

	public static void main(String[] args) {
		SpringApplication.run(AccountService.class, args);
	}

}
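Here's the corresponding account-service configuration in application.yml: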
security:
  user:
    name: root
    password: password
  oauth2:
    resource:
      loadBalanced: true
      userInfoUri: http://localhost:9999/user

Testing

We only need a web browser and a REST client (for example Chrome Advanced REST Client) to test our solution. Let's start by sending an authorization request to the resource owner. We can call the OAuth2 authorize endpoint via the Zuul gateway in the web browser.

http://localhost:8765/uaa/oauth/authorize?response_type=token&client_id=acme&redirect_uri=http://example.com&scope=openid&state=48532

After sending this request we should see the page below. Select Approve and click Authorize to request an access token from the authorization server. If the application identity is authenticated and the authorization grant is valid, an access token is returned to the application in the HTTP response.

[Image: OAuth2 approval page]

http://example.com/#access_token=b1acaa35-1ebd-4995-987d-56ee1c0619e5&token_type=bearer&state=48532&expires_in=43199

And the final step is to call the account endpoint using the access token. We have to put it in the Authorization header as a bearer token. In the sample application the logging level for security operations is set to TRACE, so you can easily find out what happened if something goes wrong.
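For example, assuming the account endpoint is exposed through the gateway under /account (the exact path depends on the controller mapping in the sample), a curl call could look like this, with the token value taken from the previous step:

curl -H "Authorization: Bearer b1acaa35-1ebd-4995-987d-56ee1c0619e5" http://localhost:8765/account/1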

[Image: REST client call with bearer token]

Conclusion

To be honest, I'm not very familiar with security issues in applications, so one very important thing for me is the simplicity of the security solution I decided to use. In Spring Security we have almost all the needed mechanisms out of the box. It also provides components which can be easily extended for more advanced requirements. You should treat this article as a brief introduction to more advanced solutions using the Spring Cloud and Spring Security projects.


Reactive microservices with Spring 5

The Spring team has announced support for the reactive programming model starting with the 5.0 release. The new Spring version will probably be released in March. Fortunately, milestone and snapshot versions with these changes are already available in the public Spring repositories. There is a new Spring Web Reactive project with support for a reactive @Controller and also a new WebClient with client-side reactive support. Today I'm going to take a closer look at the solutions suggested by the Spring team.

According to the Spring WebFlux documentation, the Spring Framework uses Reactor internally for its own reactive support. Reactor is a Reactive Streams implementation that further extends the basic Reactive Streams Publisher contract with the Flux and Mono composable API types to provide declarative operations on data sequences of 0..N and 0..1. On the server side Spring supports annotation-based and functional programming models. The annotation model uses @Controller and the other annotations also supported by Spring MVC. A reactive controller is very similar to a standard REST controller for synchronous services, except that it operates on Flux, Mono and Publisher objects. Today I'm going to show you how to develop simple reactive microservices using the annotation model and the MongoDB reactive module. Sample application source code is available on GitHub.
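To illustrate the difference between the two types, here's a trivial standalone sketch using only the reactor-core API (the class name and values are just examples):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorIntro {

	public static void main(String[] args) {
		// Flux represents a sequence of 0..N elements
		Flux.just("account-1", "account-2", "account-3")
				.map(String::toUpperCase)
				.subscribe(System.out::println);

		// Mono represents 0..1 element
		Mono.just("customer-1")
				.map(String::toUpperCase)
				.subscribe(System.out::println);
	}

}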

For our example we need to use snapshots of Spring Boot 2.0.0 and Spring Web Reactive 0.1.0. Here's the relevant pom.xml fragment for a single microservice. In our microservices we use Netty instead of the default Tomcat server.

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.0.BUILD-SNAPSHOT</version>
	</parent>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot.experimental</groupId>
				<artifactId>spring-boot-dependencies-web-reactive</artifactId>
				<version>0.1.0.BUILD-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot.experimental</groupId>
			<artifactId>spring-boot-starter-web-reactive</artifactId>
			<exclusions>
				<exclusion>
					<groupId>org.springframework.boot</groupId>
					<artifactId>spring-boot-starter-tomcat</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.ipc</groupId>
			<artifactId>reactor-netty</artifactId>
		</dependency>
		<dependency>
			<groupId>io.netty</groupId>
			<artifactId>netty-all</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>common</artifactId>
			<version>${project.version}</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.addons</groupId>
			<artifactId>reactor-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

We have two microservices: account-service and customer-service. Each of them has its own MongoDB database and exposes a simple reactive API for searching and saving data. In addition, customer-service interacts with account-service to collect all customer accounts and return them from a customer-service method. Here's our account controller code.

@RestController
public class AccountController {

	@Autowired
	private AccountRepository repository;

	@GetMapping(value = "/account/customer/{customer}")
	public Flux<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		return repository.findByCustomerId(customerId)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account")
	public Flux<Account> findAll() {
		return repository.findAll().map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account/{id}")
	public Mono<Account> findById(@PathVariable("id") Integer id) {
		return repository.findById(id)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@PostMapping("/person")
	public Mono<Account> create(@RequestBody Publisher<Account> accountStream) {
		return repository
				.save(Mono.from(accountStream)
						.map(a -> new pl.piomin.services.account.model.Account(a.getNumber(), a.getCustomerId(),
								a.getAmount())))
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

}

In all API methods we also perform mapping from the Account entity (a MongoDB @Document) to the Account DTO available in our common module. Here's the account repository class. It uses ReactiveMongoTemplate for interacting with Mongo collections.

@Repository
public class AccountRepository {

	@Autowired
	private ReactiveMongoTemplate template;

	public Mono<Account> findById(Integer id) {
		return template.findById(id, Account.class);
	}

	public Flux<Account> findAll() {
		return template.findAll(Account.class);
	}

	public Flux<Account> findByCustomerId(Integer customerId) {
		return template.find(query(where("customerId").is(customerId)), Account.class);
	}

	public Mono<Account> save(Mono<Account> account) {
		return template.insert(account);
	}

}

In our Spring Boot main or @Configuration class we should declare the Spring beans for MongoDB with connection settings.

@SpringBootApplication
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	public @Bean MongoClient mongoClient() {
		return MongoClients.create("mongodb://192.168.99.100");
	}

	public @Bean ReactiveMongoTemplate reactiveMongoTemplate() {
		return new ReactiveMongoTemplate(mongoClient(), "account");
	}

}

I used a MongoDB Docker container while working on this sample.

docker run -d --name mongo -p 27017:27017 mongo

In customer-service we call the /account/customer/{customer} endpoint from account-service. I declared a WebClient @Bean in our main class.

	public @Bean WebClient webClient() {
		return WebClient.builder().clientConnector(new ReactorClientHttpConnector()).baseUrl("http://localhost:2222").build();
	}

Here's a fragment of the customer controller. The autowired WebClient calls account-service after getting the customer from MongoDB.

	@Autowired
	private WebClient webClient;

	@GetMapping(value = "/customer/accounts/{pesel}")
	public Mono<Customer> findByPeselWithAccounts(@PathVariable("pesel") String pesel) {
		return repository.findByPesel(pesel)
				.flatMap(customer -> webClient.get().uri("/account/customer/{customer}", customer.getId())
						.accept(MediaType.APPLICATION_JSON).exchange()
						.flatMap(response -> response.bodyToFlux(Account.class)))
				.collectList()
				.map(l -> new Customer(pesel, l));
	}

We can test the GET calls using a web browser or a REST client. With POST it's not so simple. Here are two simple test cases for adding a new customer and getting a customer with accounts. The getCustomerAccounts test needs the account service running on port 2222.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerTest {

	private static final Logger logger = Logger.getLogger("CustomerTest");

	private WebClient webClient;

	@LocalServerPort
	private int port;

	@Before
	public void setup() {
		this.webClient = WebClient.create("http://localhost:" + this.port);
	}

	@Test
	public void getCustomerAccounts() {
		Customer customer = this.webClient.get().uri("/customer/accounts/234543647565")
				.accept(MediaType.APPLICATION_JSON).exchange().then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

	@Test
	public void addCustomer() {
		Customer customer = new Customer(null, "Adam", "Kowalski", "123456787654");
		customer = webClient.post().uri("/customer").accept(MediaType.APPLICATION_JSON)
				.exchange(BodyInserters.fromObject(customer)).then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

}

Conclusion

Spring's initiative to support reactive programming seems promising, but it's still at an early stage of development. It cannot yet be used with popular Spring Cloud projects like Eureka, Ribbon or Hystrix: when I tried to add these dependencies to pom.xml, my service failed to start. I hope that in the near future such functionalities as service discovery and load balancing will also be available for reactive microservices, the same as for synchronous REST microservices. Spring also has support for the reactive model in the Spring Cloud Stream project, which is more stable than the WebFlux framework. I'll try to use it in the future.

Event driven microservices using Spring Cloud Stream and RabbitMQ

Before we start, let's look at the Spring Cloud Quick Start site. There is a list of Spring Cloud releases available, grouped as release trains. We use the newest release, Camden.SR5, with Spring Boot 1.4.4.RELEASE and Spring Cloud Stream Brooklyn.SR2.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>1.4.4.RELEASE</version>
</parent>
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Camden.SR5</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Here's our architecture visualization. The order service sends a message to a RabbitMQ topic exchange. The product and shipment services listen on that topic for incoming order messages and then process them. After processing, they send a reply message to the topic on which the payment service listens. The payment service stores incoming messages, aggregating the replies from the product and shipment services, then calculates the final price and sends the final response.

[Image: event-driven architecture]

Each service has the following dependencies. We have a sample-common module where the objects for messages sent to the topics are stored. They're shared between all services. We're also using Spring Cloud Sleuth for distributed tracing with one request id across all microservices.

	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream-binder-rabbit</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-sleuth</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>sample-common</artifactId>
			<version>${project.version}</version>
		</dependency>
	</dependencies>

Let me start with a few words on the theoretical aspects of Spring Cloud Stream. Here's a short reference for that framework: the Spring Cloud Stream Reference Guide. It's based on Spring Integration. It provides three predefined interfaces out of the box:

  • Source  – can be used for an application which has a single outbound channel
  • Sink – can be used for an application which has a single inbound channel
  • Processor – can be used for an application which has both an inbound channel and an outbound channel

I'm going to show you sample usage of all of these interfaces. In the order service we're using the Source class. Using the @InboundChannelAdapter and @Poller annotations we're sending an order message to the output every 10 seconds.

@SpringBootApplication
@EnableBinding(Source.class)
public class Application {

	protected Logger logger = Logger.getLogger(Application.class.getName());

	private int index = 0;

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@Bean
	@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
	public MessageSource<Order> orderSource() {
		return () -> {
			Order o = new Order(index++, OrderType.PURCHASE, LocalDateTime.now(), OrderStatus.NEW, new Product(), new Shipment());
			logger.info("Sending order: " + o);
			return new GenericMessage<>(o);
		};
	}

	@Bean
	public AlwaysSampler defaultSampler() {
	  return new AlwaysSampler();
	}

}

Here's the output configuration in the application.yml file.

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: ex.stream.in
          binder: rabbit1
        output:
          destination: ex.stream.out
          binder: rabbit1
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 192.168.99.100
                port: 30000
                username: guest
                password: guest

The product and shipment services use the Processor interface. They listen on the stream input and, after processing, send a message to their outputs.

@SpringBootApplication
@EnableBinding(Processor.class)
public class Application {

	@Autowired
	private ProductService productService;

	protected Logger logger = Logger.getLogger(Application.class.getName());

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@StreamListener(Processor.INPUT)
	@SendTo(Processor.OUTPUT)
	public Order processOrder(Order order) {
		logger.info("Processing order: " + order);
		return productService.processOrder(order);
	}

	@Bean
	public AlwaysSampler defaultSampler() {
		return new AlwaysSampler();
	}

}

Here's the service configuration. It listens on the order service output exchange and also defines its group, named product. That group name will be used for automatic queue creation and exchange binding on RabbitMQ. There is also an output exchange defined.

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: ex.stream.out
          group: product
          binder: rabbit1
        output:
          destination: ex.stream.out2
          binder: rabbit1
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 192.168.99.100
                port: 30000
                username: guest
                password: guest

We use a Docker container for running the RabbitMQ instance.

docker run -d --name rabbit1 -p 30001:15672 -p 30000:5672 rabbitmq:management

Let's look at the management console. It's available at http://192.168.99.100:30001. Here's the ex.stream.out topic exchange configuration. Below we can see the list of declared queues.

[Image: topic exchange configuration]

[Image: declared queues]

Here's the main application class of the payment service. We use the Sink interface for listening to incoming messages. The input order is processed, and we print the final price of the order sent by the order service. Sample application source code is available on GitHub.

@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

	@Autowired
	private PaymentService paymentService;

	protected Logger logger = Logger.getLogger(Application.class.getName());

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	@StreamListener(Sink.INPUT)
	public void processOrder(Order order) {
		logger.info("Processing order: " + order);
		Order o = paymentService.processOrder(order);
		if (o != null)
			logger.info("Final response: " + (o.getProduct().getPrice() + o.getShipment().getPrice()));
	}

	@Bean
	public AlwaysSampler defaultSampler() {
		return new AlwaysSampler();
	}

}

By declaring the AlwaysSampler @Bean in every main class of our microservices, we propagate one trace and span id between all calls of a single order. Here's a fragment from our microservices' logging console. I also get the following warning message, which is not understandable for me: 'Deprecated trace headers detected. Please upgrade Sleuth to 1.1 or start sending headers present in the TraceMessageHeaders class'. Perhaps version 1.1.2.RELEASE of Spring Cloud Sleuth is not compatible with the Camden.SR5 release?

[Image: logging console]

How to set up a Continuous Delivery environment

I have already read some interesting articles and books about Continuous Delivery, because I had to set it up inside my organization. The last document about this subject I can recommend is the DZone Guide to DevOps. If you are interested in this area of software development, it can be really enlightening reading for you. The main purpose of my article is to show the practical side of Continuous Delivery – the tools which can be used to build such an environment. I'm going to show how to build a professional Continuous Delivery environment using:

  • Jenkins – most popular open source automation server
  • GitLab – web-based Git repository manager
  • Artifactory – open source Maven repository manager
  • Ansible – simple open source automation engine
  • SonarQube – open source platform for continuous code quality

Here's a picture showing our continuous delivery environment.

[Image: continuous delivery environment]

Changes pushed to a Git repository managed by the GitLab server are automatically propagated to Jenkins using a webhook. We enable push and merge request triggers. SSL verification will be disabled. In the URL field we have to put the Jenkins pipeline address with authentication credentials (user and password) and a secret token. The secret token is the API token visible in the Jenkins user profile under the Configure tab.
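For example, the webhook URL could look like this (host, user, API token and job name are placeholders):

http://user:apitoken@jenkins.example.com:8080/project/sample-pipeline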

[Image: GitLab webhook configuration]

Here's the Jenkins pipeline configuration in the 'Build triggers' section. We have to enable the option 'Build when a change is pushed to GitLab'. The GitLab CI Service URL is the address we have already set in the GitLab webhook configuration. Push and merge request triggers are enabled for all branches. Additional restrictions for branch filtering can also be added: by name or by regex. To support this kind of trigger in Jenkins you need to have the GitLab plugin installed.

[Image: Jenkins build triggers]

There are two options for events which trigger a Jenkins build:

  • push – a change in the source code is pushed to the Git repository
  • merge request – a change in the source code is pushed to one branch and then the committer creates a merge request to the build branch from the GitLab management console

If you would like to use the first option, you have to disable build branch protection to enable direct pushes to that branch. When using merge requests, branch protection needs to be activated.

[Image: branch protection settings]

Creating a merge request from the GitLab console is very intuitive. Under the 'Merge request' section we select the source and target branches and confirm the action.

[Image: merge request]

OK, that's enough about GitLab and Jenkins integration – now you know how to configure it. You only have to decide whether you prefer push or merge request triggers in your continuous delivery configuration. A merge request is used for code review in GitLab, so it is a useful additional step in the continuous pipeline. Let's move on. We also have to install a few other plugins in Jenkins to integrate it with Artifactory, SonarQube and Ansible – each of those tools has a dedicated Jenkins plugin.

Here's the configuration of my Jenkins pipeline for a sample Maven project.

node {

    withEnv(["PATH+MAVEN=${tool 'Maven3'}/bin"]) {

        stage('Checkout') {
            def branch = env.gitlabBranch
            env.branch = branch
            git url: 'http://172.16.42.157/minkowp/start.git', credentialsId: '5693747c-2f45-4557-ada2-a1da9bbfe0af', branch: branch
        }

        stage('Test') {
            def pom = readMavenPom file: 'pom.xml'
            print "Build: " + pom.version
            env.POM_VERSION = pom.version
            sh 'mvn clean test -Dmaven.test.failure.ignore=true'
            junit '**/target/surefire-reports/TEST-*.xml'
            currentBuild.description = "v${pom.version} (${env.branch})"
        }

        stage('QA') {
            withSonarQubeEnv('sonar') {
                sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
            }
        }

        stage('Build') {
            def server = Artifactory.server "server1"
            def buildInfo = Artifactory.newBuildInfo()
            def rtMaven = Artifactory.newMavenBuild()
            rtMaven.tool = 'Maven3'
            rtMaven.deployer releaseRepo:'libs-release-local', snapshotRepo:'libs-snapshot-local', server: server
            rtMaven.resolver releaseRepo:'remote-repos', snapshotRepo:'remote-repos', server: server
            rtMaven.run pom: 'pom.xml', goals: 'clean install -Dmaven.test.skip=true', buildInfo: buildInfo
            publishBuildInfo server: server, buildInfo: buildInfo
        }

        stage('Deploy') {
            dir('ansible') {
                ansiblePlaybook playbook: 'preprod.yml'
            }
            mail from: 'ci@example.com', to: 'piotr.minkowski@play.pl', subject: "New version of start: '${env.POM_VERSION}'", body: "Version '${env.POM_VERSION}' of start has been deployed to the pre-production environment."
        }

    }
}

There are five stages in my pipeline:

  1. Checkout – source code checkout from the Git branch; the branch name is sent as a parameter by the GitLab webhook
  2. Test – running JUnit tests, creating a test report visible in Jenkins and setting the job description
  3. QA – scanning the source code with the SonarQube scanner
  4. Build – building the package, resolving artifacts from Artifactory and publishing the new application release to Artifactory
  5. Deploy – deploying the application package and configuration to the server using Ansible

According to the Ansible website, it is a simple automation language that can perfectly describe an IT application infrastructure. It's easy to learn, self-documenting, and doesn't require a grad-level computer science degree to read. Ansible uses SSH keys to authenticate on the remote host, so you have to put your SSH key into the authorized_keys file on the remote host before running Ansible commands against it. The main idea is to create a playbook with a set of Ansible commands. Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Here is the catalog structure with the Ansible configuration for application deployment.

[Image: Ansible catalog structure]

Here's my Ansible playbook code. It defines the remote host, the user to connect as and the role name. This file is used inside the Jenkins pipeline in the ansiblePlaybook step.

---
- hosts: pBPreprod
  remote_user: default

  roles:
    - preprod
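The pBPreprod group used in the hosts field has to be resolved from the Ansible inventory. A minimal inventory entry could look like this (the address is a placeholder):

[pBPreprod]
192.168.0.100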

Here's the main.yml file where we define the set of Ansible commands to run on the remote server.

---
- block:
  - name: Copy configuration file
    template: src=config.yml.j2 dest=/opt/start/config.yml

  - name: Copy jar file
    copy: src=../target/start.jar dest=/opt/start/start.jar

  - name: Run jar file
    shell: java -jar /opt/start/start.jar

You can check out the build results in the Jenkins console. There is also a nice pipeline visualization with stage execution times. Each build history record has a link to the Artifactory build information and the SonarQube scanner report.

[Image: Jenkins build results]

Continuous configuration management with Jenkins and Liquibase

An important aspect of Continuous Delivery is application configuration management. Configuration is often stored in the database, especially for more complex business applications. The ability to automatically apply changes and roll them back if a new application version has to be rolled back is very important for DevOps teams. Recently, I had an opportunity to use a powerful tool for tracking, managing and applying database schema changes – Liquibase. This tool has many interesting features, like advanced support for rollback, tagging and filtering the changes to run. It can be used from Maven, Spring and Jenkins, and it has Hibernate support. Today I'm going to show you how to use Liquibase to update and roll back changes in a database using the Maven plugin and also the Jenkins plugin.

Sample code is available on GitHub. We use liquibase-maven-plugin for calling Liquibase during the Maven build execution. Here's the plugin configuration in pom.xml.

<build>
	<plugins>
		<plugin>
			<groupId>org.liquibase</groupId>
			<artifactId>liquibase-maven-plugin</artifactId>
			<version>3.5.3</version>
			<configuration>
				<propertyFile>src/main/resources/liquibase.properties</propertyFile>
			</configuration>
			<executions>
				<execution>
					<phase>process-resources</phase>
					<goals>
						<goal>update</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Here is the properties configuration file with the database settings and the Liquibase changelog location.

changeLogFile: src/main/script/changelog-master.xml
driver: com.mysql.jdbc.Driver
url: jdbc:mysql://192.168.99.100:33306/default?useSSL=false
username: default
password: default
verbose: true
dropFirst: false

In the changelog-master.xml file the database changes are listed. It is XML based, but there is also support for YAML, JSON, SQL and even the Groovy language. According to liquibase.org, the best practice is to organize your changelogs by major release.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
		http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

	<include file="src/main/script/changelog-1.0.xml" />
	<include file="src/main/script/changelog-1.1.xml" />
	<include file="src/main/script/changelog-1.2.xml" />

</databaseChangeLog>

Here's changelog-1.0.xml. We're going to create a person table with some columns and insert some example rows. Inside the rollback tag we place the configuration for rolling back those changes – a drop of the newly created table.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9 http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">

	<changeSet author="minkowski" id="1.0">
		<createTable tableName="person">
			<column name="id" type="int">
				<constraints primaryKey="true" />
			</column>
			<column name="first_name" type="varchar(20)" />
			<column name="last_name" type="varchar(50)" />
			<column name="age" type="int" />
		</createTable>
		<addAutoIncrement tableName="person" columnName="id" columnDataType="int" />
		<rollback>
			<dropTable tableName="person" />
		</rollback>
	</changeSet>

</databaseChangeLog>

We apply our changes to the database by running a Maven command on the project. It's also important to place the mysql-connector dependency in pom.xml to put the MySQL driver on the project classpath.
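For reference, that dependency could look like this (the version is only an example):

<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<version>5.1.40</version>
</dependency>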

mvn package liquibase:update

Now, if your build finished successfully, you can check out the changes committed to the database. A DATABASECHANGELOG table with the history of changes performed by Liquibase should also have been created. The changes can be rolled back using a mvn command. You can set a rollback date, a number of versions to roll back, or a tag name to roll the database back to.

mvn package liquibase:rollback
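The rollback scope can be passed as a command-line property. For example, to roll back the last two changesets or to roll back to a tag (hedged examples based on the liquibase-maven-plugin parameters rollbackCount and rollbackTag):

mvn liquibase:rollback -Dliquibase.rollbackCount=2
mvn liquibase:rollback -Dliquibase.rollbackTag=1.0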

There is also support for Liquibase in Jenkins, provided by the liquibase-runner plugin. It has pipeline support since version 1.2.0. First you need to install the plugin in the Manage Jenkins -> Manage Plugins section. Then you can call it from your pipeline. Here are example pipelines for updating and rolling back changes.

node {
	stage('Checkout') {
		git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
	}

	stage('Update') {
		liquibaseUpdate changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL'
	}
}

node {
	stage('Checkout') {
		git url: 'https://github.com/piomin/sample-liquibase-maven.git', credentialsId: 'piomin_gitlab', branch: 'master'
	}

	stage('Rollback') {
		liquibaseRollback changeLogFile: 'src/main/script/changelog-master.xml', url: 'jdbc:mysql://192.168.99.100:33306/default?useSSL=false', credentialsId: 'mysql_default', databaseEngine: 'MySQL', rollbackCount: 2
	}
}

In case someone is not very familiar with Jenkins: credentialsId needs to be configured in the Jenkins 'Credentials' section, as in the picture below, and referenced by ID inside the pipeline. In the first step of each pipeline, named 'Checkout', we clone the Git repository from github.com. In the second step we call the methods of the Liquibase Jenkins plugin, passing the same arguments we set in the properties file for the Maven plugin. We call the liquibaseRollback method with rollbackCount: 2, which means that two changeset versions will be rolled back: 1.2 and 1.1 from my sample configuration available on GitHub.

[Image: Jenkins credentials configuration]

Unfortunately, the liquibase-runner plugin does not support the Oracle database engine in pipelines. But I hope it will be fixed in the future: I have reported the issue 🙂

Launch a microservice in a Docker container

Docker, microservices and Continuous Delivery are increasingly popular topics among modern development teams. Today I'm going to create a simple microservice and show you how to run it in a Docker container using a Maven plugin or a Jenkins pipeline. Let's start with the application code, which is available at https://github.com/piomin/sample-docker-microservice.git. It has endpoints for finding all persons and a single person by id. Here's the controller code:

package pl.piomin.microservices.person;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Api {

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Person> persons;

	public Api() {
		persons = new ArrayList<>();
		persons.add(new Person(1, "Jan", "Kowalski", 22));
		persons.add(new Person(2, "Adam", "Malinowski", 33));
		persons.add(new Person(3, "Tomasz", "Janowski", 25));
		persons.add(new Person(4, "Alina", "Iksińska", 54));
	}

	@RequestMapping("/person")
	public List<Person> findAll() {
		logger.info("Api.findAll()");
		return persons;
	}

	@RequestMapping("/person/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Api.findById(%d)", id));
		return persons.stream().filter(p -> (p.getId().intValue() == id)).findAny().get();
	}

}

We need to have Docker installed on our machine and a Docker Registry container running on port 5000. If you are interested in commercial support, there is also Docker Trusted Registry, which provides an image registry and some other features like LDAP/Active Directory integration and security certificates.

docker run -d --name registry -p 5000:5000 registry:latest

We use openjdk as the base image for our new microservice image defined in the Dockerfile. The application JAR file will be launched with the java command and exposed on port 2222.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD sample-docker-microservice-1.0-SNAPSHOT.jar person-service.jar
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 2222

We use docker-maven-plugin to configure the image building process inside pom.xml. There is no need to use a Dockerfile with that plugin; it has equivalent tags in its configuration which can be used instead of Dockerfile entries. Our example, however, is based on a Dockerfile.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.13</version>
	<configuration>
		<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
		<imageTags>${project.version}</imageTags>
		<dockerDirectory>src/main/docker</dockerDirectory>
		<dockerHost>https://192.168.99.100:2376</dockerHost>
		<dockerCertPath>C:\Users\minkowp\.docker\machine\machines\default</dockerCertPath>
		<resources>
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>
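For comparison, here's a hedged sketch of roughly the same image described with plugin tags instead of a Dockerfile, based on the plugin's baseImage/entryPoint/exposes configuration options (not tested against this exact plugin version):

<configuration>
	<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
	<baseImage>openjdk</baseImage>
	<entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
	<exposes>
		<expose>2222</expose>
	</exposes>
	<resources>
		<resource>
			<targetPath>/</targetPath>
			<directory>${project.build.directory}</directory>
			<include>${project.build.finalName}.jar</include>
		</resource>
	</resources>
</configuration>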

Finally, we can build our code using the Maven command.

mvn clean package docker:build

After running the Maven command, the image is tagged and pushed to the local registry.

docker tag e106e5bf3d57 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
docker push localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT

The application image is now registered in the local Docker Registry. Optionally, we could push it to docker.io or to an enterprise Docker Trusted Registry. We can check it using the API available at http://192.168.99.100:5000/v2/_catalog. Here's the Docker command for running a container from the newly created image stored in the local registry. The service is available at http://192.168.99.100:2222/person/.

docker run -d --name sample1 -p 2222:2222 microservice/sample-docker-microservice:1.0-SNAPSHOT

Part 1: Creating microservices using Spring Cloud, Eureka and Zuul

The Spring framework provides a set of libraries for creating microservices in Java. They are part of the Spring Cloud project. Today I'm going to show you how to create simple microservices using Spring Boot and the following technologies:

  • Zuul –  gateway service that provides dynamic routing, monitoring, resiliency, security, and more
  • Ribbon – client side load balancer
  • Feign – declarative REST client
  • Eureka – service registration and discovery
  • Sleuth – distributed tracing via logs
  • Zipkin – distributed tracing system with request visualization.

A sample application is available at https://github.com/piomin/sample-spring-microservices.git. Here's a picture of the application architecture. The client calls, via the Zuul gateway, an endpoint exposed by customer-service, which stores basic customer data. This endpoint interacts with account-service to collect information about the customer's accounts, served by an endpoint in account-service. Each service registers itself with the Eureka discovery service and sends its logs to Zipkin using spring-cloud-sleuth.

[Image: application architecture]

This is the account-service controller. We use the findByCustomer method to collect a customer's accounts by customer id.

@RestController
public class Api {
	private List<Account> accounts;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	public Api() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, 1, "111111"));
		accounts.add(new Account(2, 2, "222222"));
		accounts.add(new Account(3, 3, "333333"));
		accounts.add(new Account(4, 4, "444444"));
		accounts.add(new Account(5, 1, "555555"));
		accounts.add(new Account(6, 2, "666666"));
		accounts.add(new Account(7, 2, "777777"));
	}

	@RequestMapping("/accounts/{number}")
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return accounts.stream().filter(it -> it.getNumber().equals(number)).findFirst().get();
	}

	@RequestMapping("/accounts/customer/{customer}")
	public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return accounts.stream().filter(it -> it.getCustomerId().intValue()==customerId.intValue()).collect(Collectors.toList());
	}

	@RequestMapping("/accounts")
	public List<Account> findAll() {
		logger.info("Account.findAll()");
		return accounts;
	}
}

This is the customer-service controller. The findById method interacts with account-service using a Feign client.

@RestController
public class Api {

	@Autowired
	private AccountClient accountClient;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Customer> customers;

	public Api() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "12345", "Adam Kowalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(2, "12346", "Anna Malinowska", CustomerType.INDIVIDUAL));
		customers.add(new Customer(3, "12347", "Paweł Michalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(4, "12348", "Karolina Lewandowska", CustomerType.INDIVIDUAL));
	}

	@RequestMapping("/customers/pesel/{pesel}")
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return customers.stream().filter(it -> it.getPesel().equals(pesel)).findFirst().get();
	}

	@RequestMapping("/customers")
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return customers;
	}

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts = accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
}
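And here's the Feign client interface used by findById: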
@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

To be able to use the Feign client we only have to enable it in our main class.


package pl.piomin.microservices.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}

There is also an important configuration inside application.yml in customer-service. The Ribbon load balancer needs to be enabled, and I also suggest setting lease renewal and expiration on the Eureka client to enable unregistration from the discovery service when our service shuts down.


server:
  port: ${PORT:3333}

eureka:
  client:
    serviceUrl:
      defaultZone: ${vcap.services.discovery-service.credentials.uri:http://127.0.0.1:8761}/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2

ribbon:
  eureka:
    enabled: true

OK, fine. We've got our two microservices implemented and configured. But first we have to create and run a discovery service based on the Eureka server. This functionality is provided by our discovery-service. We only have to import one dependency in pom.xml, called spring-cloud-starter-eureka-server, and enable it in the application main class using the @EnableEurekaServer annotation. Here is the configuration of the Eureka server in the application.yml file:


server:
  port: ${PORT:8761}

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false
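For completeness, here's a minimal sketch of what the discovery-service main class looks like (the class name is just an example; it assumes spring-cloud-starter-eureka-server is on the classpath):

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryService {

	public static void main(String[] args) {
		SpringApplication.run(DiscoveryService.class, args);
	}

}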

After running discovery-service we can see its monitoring console, available on port 8761. Now let's run our two microservices on the default ports set in their application.yml configuration files, and two more instances of them on other ports using the -DPORT VM argument: for example account-service on port 2223, and customer-service on port 3334. Now let's take a look at the Eureka monitoring console: we've got two instances of account-service running on ports 2222 and 2223 and two instances of customer-service running on ports 3333 and 3334.

[Image: Eureka monitoring console]

We have two instances of each microservice registered on the discovery server. But we need to hide our system's complexity from the outside world. There should be only one IP address exposed on one port available to inbound clients. That's why we need an API gateway – Zuul. Zuul will forward our requests to the specific microservice based on its proxy configuration. Such requests will also be load balanced by the Ribbon client. To enable the Zuul gateway, the spring-cloud-starter-zuul dependency should be added to pom.xml and the @EnableZuulProxy annotation to the main class. This is the Zuul configuration for our services, set in application.yml.


server:
  port: 8765

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service

...

As we can see, Zuul is configured to be available under its default port 8765, and it forwards requests from the /api/account/ path to account-service and from /api/customer/ to customer-service. When the URL http://localhost:8765/api/customer/customers/1 is called several times, we'll see that the calls are load balanced between the two instances of each microservice. Also, when we shut down one of the microservice instances, we can see that it is unregistered from the Eureka server.

In the second part of the article I'll present how to use Spring Cloud Sleuth, Zipkin and ELK. If you are interested, see Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin.