Exposing Microservices over REST with Protocol Buffers

Today, exposing a RESTful API with the JSON protocol is the most common standard. We can find many articles describing the advantages and disadvantages of JSON versus XML. Both of these protocols exchange messages in text format. If performance is an important aspect affecting the choice of communication protocol in your systems, you should definitely pay attention to Protocol Buffers. It is a binary format created by Google as:

A language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more.

Protocol Buffers, sometimes referred to as Protobuf, is not only a message format but also a set of language rules that define the structure of messages. It is extremely useful in service-to-service communication, which has been very well described in the article Beating JSON performance with Protobuf. In that example Protobuf was about 5 times faster than JSON in tests based on the Spring Boot framework.

An introduction to Protocol Buffers can be found here. My sample is similar to previous samples from my weblog – it is based on two microservices, account and customer, where the latter calls one of account’s endpoints. Let’s begin with the message type definitions provided inside a .proto file. Place your .proto file in the src/main/proto directory. Here’s account.proto defined in the account service. We set java_package and java_outer_classname to define the package and name of the generated Java class. Message definition syntax is pretty intuitive. The Account object generated from that file has three properties: id, customerId and number. There is also an Accounts object which wraps a list of Account objects.

syntax = "proto3";

package model;

option java_package = "pl.piomin.services.protobuf.account.model";
option java_outer_classname = "AccountProto";

message Accounts {
	repeated Account account = 1;
}

message Account {

	int32 id = 1;
	string number = 2;
	int32 customer_id = 3;

}
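
Before wiring it into Spring it is worth seeing what the generated code gives us. Below is a minimal sketch of building and serializing an Account – the round trip is only an illustration and is not part of the sample project; the setter names follow Protobuf’s standard Java field mapping.

import com.google.protobuf.InvalidProtocolBufferException;

import pl.piomin.services.protobuf.account.model.AccountProto.Account;

public class AccountProtoDemo {

	public static void main(String[] args) throws InvalidProtocolBufferException {
		// every generated message type exposes a builder
		Account account = Account.newBuilder()
				.setId(1)
				.setNumber("111111")
				.setCustomerId(1)
				.build();
		// compact binary serialization instead of JSON text
		byte[] bytes = account.toByteArray();
		// parsing returns a new immutable message instance
		Account parsed = Account.parseFrom(bytes);
		System.out.println(parsed.getNumber());
	}

}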

Here’s the .proto file from the customer service. It is a little more complicated than the previous one from the account service. In addition to its own message types it also contains the definitions of the account service messages, because they are used by the @Feign client.

syntax = "proto3";

package model;

option java_package = "pl.piomin.services.protobuf.customer.model";
option java_outer_classname = "CustomerProto";

message Accounts {
	repeated Account account = 1;
}

message Account {

	int32 id = 1;
	string number = 2;
	int32 customer_id = 3;

}

message Customers {
	repeated Customer customers = 1;
}

message Customer {

	int32 id = 1;
	string pesel = 2;
	string name = 3;
	CustomerType type = 4;
	repeated Account accounts = 5;

	enum CustomerType {
		INDIVIDUAL = 0;
		COMPANY = 1;
	}

}

We generate source code from the message definitions above using the protobuf-maven-plugin Maven plugin. The plugin needs the protoc executable file location set via the protocExecutable property. The executable can be downloaded from Google’s Protocol Buffers download site.

<plugin>
	<groupId>org.xolstice.maven.plugins</groupId>
	<artifactId>protobuf-maven-plugin</artifactId>
	<version>0.5.0</version>
	<executions>
		<execution>
			<id>protobuf-compile</id>
			<phase>generate-sources</phase>
			<goals>
				<goal>compile</goal>
			</goals>
			<configuration>
				<outputDirectory>src/main/generated</outputDirectory>
				<protocExecutable>${proto.executable}</protocExecutable>
			</configuration>
		</execution>
	</executions>
</plugin>

Protobuf classes are generated into the src/main/generated output directory. Let’s add that source directory to the Maven sources with build-helper-maven-plugin.

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>build-helper-maven-plugin</artifactId>
	<executions>
		<execution>
			<id>add-source</id>
			<phase>generate-sources</phase>
			<goals>
				<goal>add-source</goal>
			</goals>
			<configuration>
				<sources>
					<source>src/main/generated</source>
				</sources>
			</configuration>
		</execution>
	</executions>
</plugin>

Sample application source code is available on GitHub. Before proceeding to the next steps, build the application using the mvn clean install command. Generated classes are available under src/main/generated and our microservices are ready to run. Now, let me describe some implementation details. We need two dependencies in the Maven pom.xml to use Protobuf.

<dependency>
	<groupId>com.google.protobuf</groupId>
	<artifactId>protobuf-java</artifactId>
	<version>3.3.1</version>
</dependency>
<dependency>
	<groupId>com.googlecode.protobuf-java-format</groupId>
	<artifactId>protobuf-java-format</artifactId>
	<version>1.4</version>
</dependency>

Then, we need to declare ProtobufHttpMessageConverter as the default HttpMessageConverter @Bean and inject it into the RestTemplate @Bean.

    @Bean
    @Primary
    ProtobufHttpMessageConverter protobufHttpMessageConverter() {
        return new ProtobufHttpMessageConverter();
    }

    @Bean
    RestTemplate restTemplate(ProtobufHttpMessageConverter hmc) {
        return new RestTemplate(Arrays.asList(hmc));
    }
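
With the converter registered, fetching a binary response is no different from a JSON call. Here’s a hedged example of calling the account endpoint with the RestTemplate declared above – the port is an assumption based on this sample.

// the converter decodes the application/x-protobuf body into the generated type
AccountProto.Account account = restTemplate.getForObject(
		"http://localhost:2222/accounts/{number}", AccountProto.Account.class, "111111");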

Here’s the REST @Controller code. Account and Accounts from the generated AccountProto class are returned as a response body in all three API methods visible below. All objects generated from .proto files have a newBuilder method used for creating new object instances. I also set application/x-protobuf as the default response content type.

@RestController
public class AccountController {

	@Autowired
	AccountRepository repository;

	protected Logger logger = Logger.getLogger(AccountController.class.getName());

	@RequestMapping(value = "/accounts/{number}", produces = "application/x-protobuf")
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return repository.findByNumber(number);
	}

	@RequestMapping(value = "/accounts/customer/{customer}", produces = "application/x-protobuf")
	public Accounts findByCustomer(@PathVariable("customer") Integer customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return Accounts.newBuilder().addAllAccount(repository.findByCustomer(customerId)).build();
	}

	@RequestMapping(value = "/accounts", produces = "application/x-protobuf")
	public Accounts findAll() {
		logger.info("Account.findAll()");
		return Accounts.newBuilder().addAllAccount(repository.findAll()).build();
	}

}

Method GET /accounts/customer/{customer} is called from customer service using @Feign client.

@FeignClient(value = "account-service")
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    Accounts getAccounts(@PathVariable("customerId") Integer customerId);

}
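
The client is then injected like any other Spring bean. Here’s a minimal sketch of how it might be used inside the customer service – the endpoint and field names are illustrative and not taken from the sample.

@RestController
public class CustomerController {

	@Autowired
	AccountClient accountClient;

	@RequestMapping(value = "/customers/{id}/accounts", produces = "application/x-protobuf")
	public Accounts findAccounts(@PathVariable("id") Integer id) {
		// the registered Protobuf message converter decodes the binary response body
		return accountClient.getAccounts(id);
	}

}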

We can easily test described configuration using JUnit test class visible below.

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@RunWith(SpringRunner.class)
public class AccountApplicationTest {

	protected Logger logger = Logger.getLogger(AccountApplicationTest.class.getName());

	@Autowired
	TestRestTemplate template;

	@Test
	public void testFindByNumber() {
		Account a = this.template.getForObject("/accounts/{id}", Account.class, "111111");
		logger.info("Account[\n" + a + "]");
	}

	@Test
	public void testFindByCustomer() {
		Accounts a = this.template.getForObject("/accounts/customer/{customer}", Accounts.class, "2");
		logger.info("Accounts[\n" + a + "]");
	}

	@Test
	public void testFindAll() {
		Accounts a = this.template.getForObject("/accounts", Accounts.class);
		logger.info("Accounts[\n" + a + "]");
	}

	@TestConfiguration
	static class Config {

		@Bean
		public RestTemplateBuilder restTemplateBuilder() {
			return new RestTemplateBuilder().additionalMessageConverters(new ProtobufHttpMessageConverter());
		}

	}

}

Conclusion

This article shows how to enable Protocol Buffers for a microservices project based on Spring Boot. Protocol Buffers is an alternative to text-based protocols like XML or JSON and surpasses them in terms of performance. Adapting a Spring Boot application to this protocol is pretty simple. For microservices we can still use Spring Cloud components like Feign or Ribbon in combination with Protocol Buffers, the same as with REST over JSON or XML.

Circuit Breaker, Fallback and Load Balancing with Apache Camel

Apache Camel has just released a new version of their framework – 2.19. In one of my previous articles on DZone I described details about the microservices support which was released in Camel 2.18. There are some new features in the ServiceCall EIP component, which is responsible for microservice calls. You can see example source code which is based on the sample from my article on DZone. It is available on GitHub under the new branch fallback.

In the code fragment below you can see the DSL route configuration with support for the Hystrix circuit breaker, the Ribbon load balancer, and Consul service discovery and registration. As a service discovery in the route definition you can also use some other solutions instead of Consul, like etcd (etcdServiceDiscovery) or Kubernetes (kubernetesServiceDiscovery).

from("direct:account")
	.to("bean:customerService?method=findById(${header.id})")
	.log("Msg: ${body}").enrich("direct:acc", new AggregationStrategyImpl());

from("direct:acc").setBody().constant(null)
	.hystrix()
		.hystrixConfiguration()
			.executionTimeoutInMilliseconds(2000)
		.end()
	.serviceCall()
		.name("account//account")
		.component("netty4-http")
		.ribbonLoadBalancer("ribbon-1")
		.consulServiceDiscovery("http://192.168.99.100:8500")
	.end()
	.unmarshal(format)
	.endHystrix()
	.onFallback()
	.to("bean:accountFallback?method=getAccounts");

We can easily configure all of Hystrix’s parameters just by calling the hystrixConfiguration method. In the sample above Hystrix waits a maximum of 2 seconds for the response from the remote service. In case of a timeout, the fallback @Bean is called. The fallback @Bean implementation is really simple – it returns an empty list.

@Service
public class AccountFallback {

	public List<Account> getAccounts() {
		return new ArrayList<>();
	}

}

Alternatively, the configuration can be implemented using object declarations. Here is the service call configuration with Ribbon and Consul. Additionally, we can provide some parameters to Ribbon, like the client read timeout or max retry attempts. Unfortunately, it seems they don’t work in this version of Apache Camel 🙂 (you can try to test it by yourself). I hope this will be corrected soon.

ServiceCallConfigurationDefinition def = new ServiceCallConfigurationDefinition();

ConsulConfiguration config = new ConsulConfiguration();
config.setUrl("http://192.168.99.100:8500");
config.setComponent("netty4-http");
ConsulServiceDiscovery discovery = new ConsulServiceDiscovery(config);

RibbonConfiguration c = new RibbonConfiguration();
c.addProperty("MaxAutoRetries", "0");
c.addProperty("MaxAutoRetriesNextServer", "1");
c.addProperty("ReadTimeout", "1000");
c.setClientName("ribbon-1");
RibbonServiceLoadBalancer lb = new RibbonServiceLoadBalancer(c);
lb.setServiceDiscovery(discovery);

def.setComponent("netty4-http");
def.setLoadBalancer(lb);
def.setServiceDiscovery(discovery);
context.setServiceCallConfiguration(def);

I described a similar case for Spring Cloud and Netflix OSS in one of my previous articles. Just like in the example presented there, I also set here a delay inside the account service, which depends on the port on which the microservice was started.

@Value("${port}")
private int port;

public List<Account> findByCustomerId(Integer customerId) {
	List<Account> l = new ArrayList<>();
	l.add(new Account(1, "1234567890", 4321, customerId));
	l.add(new Account(2, "1234567891", 12346, customerId));
	if (port%2 == 0) {
		try {
			Thread.sleep(5000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
	return l;
}

The results for the Spring Cloud sample were much more satisfying. The introduced configuration parameters such as the read timeout for Ribbon worked, and in addition Hystrix was able to automatically redirect a much smaller number of requests to the slow service – only about 2% of requests reached the delayed instance. This shows that Apache Camel still has a few things to improve if it wants to compete with the Spring Cloud framework in microservices support.

Spring Cloud Microservices at Pivotal Platform

Imagine you have multiple microservices running on different machines as multiple instances. It seems natural to think about tools that help you in the process of monitoring and managing all of them. If we add that our microservices are created based on the Spring Cloud framework, it seems obvious that we should look at the Pivotal platform. Here is a figure with the platform’s architecture, taken from Pivotal’s main site.

PVDI-Microservices-Architecture

Although the Pivotal Platform can run applications written in many languages, it has the best support for Spring Cloud Services and Netflix OSS tools, as you can see in the figure above. There are three ways we can take advantage of what Pivotal offers.

Pivotal Cloud Foundry – a solution that can be run on public IaaS or private cloud like AWS, Google Cloud Platform, Microsoft Azure, VMware vSphere, OpenStack.

Pivotal Web Services – a hosted cloud-native platform available at the pivotal.io site.

PCF Dev – an instance which can be run locally as a single virtual machine. It offers the opportunity to develop apps using an offline environment with basic services installed, like Spring Cloud Services (SCS), MySQL and Redis databases, and a RabbitMQ broker. If you want to run it locally with SCS you need more than 6GB of free RAM.

As Spring Cloud Services there are available: Circuit Breaker (Hystrix), Service Registry (Eureka) and a standard Spring Cloud Config Server based on Git configuration.

scs

That’s all I wanted to say about the theory. Let’s move on to practice. On the Pivotal website we have detailed materials on how to set it up and how to create and deploy a simple microservice based on Spring Cloud solutions. In this article I will try to present the essence collected from those descriptions, based on one of my standard examples from previous posts. As always, the sample source code is available on GitHub. If you are interested in a detailed description of the sample application, microservices and Spring Cloud, read my previous articles:

Part 1: Creating microservice using Spring Cloud, Eureka and Zuul

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

If you have a lot of free RAM you can install PCF Dev on your local workstation. You need to have VirtualBox installed. Then download and install the Cloud Foundry Command Line Interface (CF CLI) and PCF Dev. All of this is described here. Finally, you can run the command below and take a small break for coffee – the virtual machine needs to be downloaded and started.

cf dev start -s scs

For those who do not have enough RAM (like me) there is the Pivotal Web Services platform. It is available here. Before using it you have to register on the Pivotal site. The rest of the article is identical for both options.
In comparison to previous examples of Spring Cloud based microservices, we need to make some changes. There is one additional dependency inside every microservice’s pom.xml.

<properties>
	...
	<spring-cloud-services.version>1.4.1.RELEASE</spring-cloud-services.version>
	<spring-cloud.version>Dalston.RELEASE</spring-cloud.version>
</properties>

<dependencies>
	<dependency>
		<groupId>io.pivotal.spring.cloud</groupId>
		<artifactId>spring-cloud-services-starter-service-registry</artifactId>
	</dependency>
	...
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
		<dependency>
			<groupId>io.pivotal.spring.cloud</groupId>
			<artifactId>spring-cloud-services-dependencies</artifactId>
			<version>${spring-cloud-services.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We also use the Maven Cloud Foundry plugin, cf-maven-plugin, for application deployment on the Pivotal platform. Here is a sample for account-service. We run two instances of that microservice with max memory 512MB. Our application name is piomin-account-service.

<plugin>
	<groupId>org.cloudfoundry</groupId>
	<artifactId>cf-maven-plugin</artifactId>
	<version>1.1.3</version>
	<configuration>
		<target>http://api.run.pivotal.io</target>
		<org>piotrminkowski</org>
		<space>development</space>
		<appname>piomin-account-service</appname>
		<memory>512</memory>
		<instances>2</instances>
		<server>cloud-foundry-credentials</server>
	</configuration>
</plugin>

Don’t forget to add the credentials configuration to the Maven settings.xml file.

<server>
	<id>cloud-foundry-credentials</id>
	<username>piotr.minkowski@gmail.com</username>
	<password>***</password>
</server>

Now, when building the sample application, we need to append the cf:push command.

mvn clean install cf:push

Here is the circuit breaker implementation inside customer-service.

@Service
public class AccountService {

	@Autowired
	private AccountClient client;

	@HystrixCommand(fallbackMethod = "getEmptyList")
	public List<Account> getAccounts(Integer customerId) {
		return client.getAccounts(customerId);
	}

	List<Account> getEmptyList(Integer customerId) {
		return new ArrayList<>();
	}

}

There is a randomly generated delay on the account service side, so for about 25% of calls the circuit breaker should be activated.

@RequestMapping("/accounts/customer/{customer}")
public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
	logger.info(String.format("Account.findByCustomer(%s)", customerId));
	Random r = new Random();
	int rr = r.nextInt(4);
	if (rr == 1) {
		try {
			Thread.sleep(2000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
	return accounts.stream().filter(it -> it.getCustomerId().intValue() == customerId.intValue())
		.collect(Collectors.toList());
}

After successfully deploying the application using the Maven cf:push command we can go to the Pivotal Web Services console available at https://console.run.pivotal.io/. Here are our two deployed services: two instances of piomin-account-service and one instance of piomin-customer-service.

pivotal-1

I have also activated Circuit Breaker and Service Registry from the Marketplace.

pivotal-2

Every application needs to be bound to the services it uses. To do that, select a service, expand the Bound Apps panel, and select the checkbox next to each application name.

pivotal-4

After this step the applications need to be restarted. This can also be done using the web dashboard inside each service.

pivotal-5

Finally, all services are registered in Eureka and we can perform some tests using the customer endpoint https://piomin-customer-service.cfapps.io/customers/{id}.

pivotal-4

Final words

With the Pivotal solution we can easily deploy, scale and monitor our microservices. Deployment and scaling can be done using the Maven plugin or via the web dashboard. On Pivotal there are also some services prepared especially for microservices needs, like a service registry, circuit breaker and configuration server. Pivotal is a competitor to solutions like Kubernetes, which is based on Docker containerization (more about those tools here). It is especially useful if you are creating microservices based on the Spring Boot and Spring Cloud frameworks.

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

You have probably read some articles about Hystrix and you know for what purpose it is used. Today I would like to show you an example of exactly how to use it, which gives you the ability to combine it with other tools from the Netflix OSS stack like Feign and Ribbon. In this article I assume that you have basic knowledge of topics such as microservices, load balancing and service discovery. If not, I suggest you read some articles about them, for example my short introduction to microservices architecture available here: Part 1: Creating microservice using Spring Cloud, Eureka and Zuul. The code sample used in that article is also used now. There is also sample source code available on GitHub. For the sample described now see the hystrix branch, for the basic sample see the master branch.

Let’s look at some scenarios for using fallback and circuit breaker. We have a Customer Service which calls an API method from the Account Service. There are two running instances of Account Service. Requests to the Account Service instances are load balanced by the Ribbon client 50/50.

micro-details-1

Scenario 1

Hystrix is disabled for the Feign client (1), the auto retries mechanism is disabled for the Ribbon client on the local instance (2) and on other instances (3). The Ribbon read timeout is shorter than the request’s max processing time (4). This scenario also occurs with the default Spring Cloud configuration without Hystrix. When you call the customer test method you sometimes receive the full response and sometimes a 500 HTTP error code (50/50).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 0 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 2

Hystrix is still disabled for the Feign client (1), the auto retries mechanism is disabled for the Ribbon client on the local instance (2) but enabled once on other instances (3). You always receive the full response. If your request is received by the instance with the delayed response it is timed out after 1 second and then Ribbon calls another instance – in that case the one without the delay. You could also change MaxAutoRetries to a positive value, but that gains us nothing in this sample.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 1 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 3

Here is a not very elegant solution to the problem. We set ReadTimeout to a value bigger than the delay inside the API method (5000 ms).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 10000

feign:
  hystrix:
    enabled: false

Generally, the configuration from Scenarios 2 and 3 is correct – you always get the full response. But in some cases you will wait more than 1 second (Scenario 2) or more than 5 seconds (Scenario 3), and the delayed instance still receives 50% of requests from the Ribbon client. Fortunately, there is Hystrix – a circuit breaker.

Scenario 4

Let’s enable Hystrix just by removing the feign property. There are no auto retries for the Ribbon client (1) and its read timeout (2) is bigger than Hystrix’s timeout (3). 1000ms is also the default value for the Hystrix timeoutInMilliseconds property. The Hystrix circuit breaker and fallback will work for the delayed instance of the account service. For the first few requests you receive the fallback response from Hystrix. Then the delayed instance will be cut off from requests, and most of them will be directed to the instance without the delay.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(1)
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 2000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000 #(3)

Scenario 5

This scenario is a more advanced development of Scenario 4. Now the Ribbon timeout (2) is lower than the Hystrix timeout (3) and the auto retries mechanism is enabled (1) for the local instance and for other instances (4). The result is the same as for Scenarios 2 and 3 – you receive the full response, but Hystrix is enabled and it cuts off the delayed instance from future requests.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 3 #(1)
  MaxAutoRetriesNextServer: 1 #(4)
  ReadTimeout: 1000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000 #(3)

I could imagine a few other scenarios. But the idea was just to show the differences in circuit breaker and fallback behavior when modifying configuration properties for Feign, Ribbon and Hystrix in application.yml.

Hystrix

Let’s take a closer look at the standard Hystrix circuit breaker and the usage described in Scenario 4. To enable Hystrix in your Spring Boot application you have to add the following dependencies to pom.xml. The second step is to add the @EnableCircuitBreaker annotation to the main application class, and also @EnableHystrixDashboard if you would like to have the UI dashboard available.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix-dashboard</artifactId>
</dependency>
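
The main class might then look as follows – a sketch, with the class name assumed from this sample; @EnableHystrixDashboard is optional, as described above.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;

@SpringBootApplication
@EnableCircuitBreaker
@EnableHystrixDashboard
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}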

The Hystrix fallback is set on the Feign client inside the customer service.

@FeignClient(value = "account-service", fallback = AccountFallback.class)
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

The fallback implementation is really simple. In this case I just return an empty list instead of the customer’s account list received from the account service.

@Component
public class AccountFallback implements AccountClient {

	@Override
	public List<Account> getAccounts(Integer customerId) {
		List<Account> acc = new ArrayList<Account>();
		return acc;
	}

}

Now, we can perform some tests. Let’s start the discovery service, two instances of the account service on different ports (with the -DPORT VM argument during startup) and the customer service. The endpoint for tests is /customers/{id}. There is also a JUnit test class which sends multiple requests to this endpoint, available in the customer-service module: pl.piomin.microservices.customer.ApiTest.

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts =  accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
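
The ApiTest class itself is not reproduced in this post; a load-generating test might look like the minimal sketch below – the request count matches the description that follows, while the port and customer id are assumptions.

import org.junit.Test;
import org.springframework.web.client.RestTemplate;

public class ApiTest {

	private RestTemplate template = new RestTemplate();

	@Test
	public void testCustomerWithAccounts() {
		for (int i = 0; i < 1000; i++) {
			// each call goes through the Feign client to one of the two account service instances
			Customer c = template.getForObject("http://localhost:3333/customers/{id}", Customer.class, 1);
		}
	}

}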

I enabled the Hystrix Dashboard on the account-service main class. If you would like to access it, call the http://localhost:2222/hystrix address from your web browser and then type the Hystrix stream address from customer-service: http://localhost:3333/hystrix.stream. When I ran a test that sends 1000 requests to the customer service, about 20 (2%) of them were forwarded to the delayed instance of the account service, and the remaining ones to the instance without the delay. The Hystrix dashboard during that test is visible below. For more advanced Hystrix configuration refer to its documentation available here.

hystrix-1

Testing Java Microservices

While developing a new application we should never forget about testing. This seems particularly important when working with microservices. Microservices testing requires a different approach than designing tests for monolithic applications. As far as monolithic testing is concerned, the main focus is put on unit testing and, in most cases, integration tests with the database layer. In the case of microservices, the most important tests concern interactions between those microservices. Although every microservice is independently developed and released, a change in one of them can affect all services interacting with it. Interaction between them is realized via messages, usually sent over REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we are testing the smallest pieces of code, for example a single method or component, mocking every call to other methods or components. There are many popular frameworks supporting unit tests in Java, like JUnit and TestNG, with Mockito for mocking. The main task of this type of testing is to confirm that the implementation meets the requirements (see the short unit test sketch after this list).

Integration tests – we are testing interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. It means that we should design test scenarios which test all the microservices taking part in the process.

Contract tests – tests at the boundary of an external service verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.
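
To make the first layer more concrete, here’s a minimal unit test sketch with JUnit and Mockito, assuming the CustomerController and CustomerRepository shown later in this article; the test data is illustrative.

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class CustomerControllerTest {

	@Mock
	private CustomerRepository repository;

	@InjectMocks
	private CustomerController controller;

	@Test
	public void findByPeselReturnsCustomerFromRepository() {
		// the repository is mocked, so no database is involved
		Customer expected = new Customer();
		when(repository.findByPesel("12345678901")).thenReturn(expected);
		assertSame(expected, controller.findByPesel("12345678901"));
	}

}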

In the figure below we can see the component diagram of one sample microservice (the customer service). The architecture is similar for all other sample microservices described in this post. The customer service interacts with a Mongo database, storing all customers there. Mapping between objects and the database is realized with Spring Data’s @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized by @Feign REST clients. The customer service collects all of a customer’s accounts and products from external microservices. The @Repository and @Feign clients are injected into the @Controller, which is exposed outside via a REST resource.

testingmicroservices1

In this article I’ll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

testingmicroservices2

Now, let’s take a look at the big picture. We have four microservices interacting with each other, as we can see in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes, Eureka discovery needs to be disabled on all microservices.

testingmicroservices3

Sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple. It extends the MongoRepository CRUD component. The @Repository and @Feign clients are injected into CustomerController.

public interface CustomerRepository extends MongoRepository<Customer, String> {

	public Customer findByPesel(String pesel);
	public Customer findById(String id);

}

Here’s the full controller code.

@RestController
public class CustomerController {

	@Autowired
	private AccountClient accountClient;
	@Autowired
	private ProductClient productClient;

	@Autowired
	CustomerRepository repository;

	protected Logger logger = Logger.getLogger(CustomerController.class.getName());

	@RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
	public Customer findById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = repository.findById(id);
		List<Account> accounts =  accountClient.getAccounts(id);
		logger.info(String.format("Customer.findById(): %s", accounts));
		customer.setAccounts(accounts);
		return customer;
	}

	@RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
	public Customer findWithProductsById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findWithProductsById(%s)", id));
		Customer customer = repository.findById(id);
		List<Product> products =  productClient.getProducts(id);
		logger.info(String.format("Customer.findWithProductsById(): %s", products));
		customer.setProducts(products);
		return customer;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.POST)
	public Customer add(@RequestBody Customer customer) {
		logger.info(String.format("Customer.add(%s)", customer));
		return repository.save(customer);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.PUT)
	public Customer update(@RequestBody Customer customer) {
		logger.info(String.format("Customer.update(%s)", customer));
		return repository.save(customer);
	}

}

To replace the external Mongo database with an embedded in-memory instance during automated tests, we only have to add the following dependency to pom.xml.

<dependency>
	<groupId>de.flapdoodle.embed</groupId>
	<artifactId>de.flapdoodle.embed.mongo</artifactId>
	<scope>test</scope>
</dependency>
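
With that artifact on the test classpath Spring Boot auto-configures the embedded instance, so a plain repository test needs no running database. Here’s a minimal sketch – the pesel setter on Customer is assumed from the sample’s entity.

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomerRepositoryTest {

	@Autowired
	private CustomerRepository repository;

	@Test
	public void shouldStoreAndFindCustomer() {
		// the document is written to the embedded in-memory MongoDB
		Customer customer = new Customer();
		customer.setPesel("12345678901");
		repository.save(customer);
		assertNotNull(repository.findByPesel("12345678901"));
	}

}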

If we use different addresses or connection credentials, the application settings should also be overridden in src/test/resources. Here’s the application.yml file for testing. At the bottom there is a configuration for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017
logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false

The in-memory MongoDB instance is started automatically during a Spring Boot JUnit test. The next step is to add the Spring Cloud Contract dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract we also have to add the following plugin to pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>1.1.0.RELEASE</version>
	<extensions>true</extensions>
	<configuration>
			<packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
	</configuration>
</plugin>

The packageWithBaseClasses property defines the package where base classes extended by the generated test classes are stored. Here’s the base test class for the account service tests. In our sample architecture the account service is only a producer – it does not consume any other services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

As opposed to the account service, the customer service consumes some services for collecting the customer’s accounts and products. That’s why the base test class for the customer service needs to define the stub artifact data.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}
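
With the stub runner in place, a hand-written test against the customer API exercises the real controller, while the internal Feign call to account-service hits the recorded stub on port 2222. Here’s a sketch – the customer id is illustrative, and given() comes from RestAssuredMockMvc.

import static com.jayway.restassured.module.mockmvc.RestAssuredMockMvc.given;

import org.junit.Test;

public class CustomerStubTest extends ApiScenario1Base {

	@Test
	public void findByIdShouldUseAccountServiceStub() {
		// the controller is called directly, the account-service response comes from the stub
		given()
		.when()
			.get("/customers/{id}", "1")
		.then()
			.statusCode(200);
	}

}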

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts can be implemented using the Groovy language. Here’s a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'POST'
    url '/accounts'
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
  response {
    status 200
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
}

Test classes are generated under the target/generated-test-sources catalog. Here’s the generated class for the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

	@Test
	public void validate_1_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
			assertThatJson(parsedJson).field("balance").isEqualTo(1234);
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
	}

	@Test
	public void validate_2_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
			assertThatJson(parsedJson).field("balance").isEqualTo(4675);
	}

	@Test
	public void validate_3_getAccounts() throws Exception {
		// given:
			MockMvcRequestSpecification request = given();

		// when:
			ResponseOptions response = given().spec(request)
					.get("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
			assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
			assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
			assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
	}

}

In the generated class there are three JUnit tests because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 catalog, as we can see in the picture below. The number in every file’s prefix defines the test order. The second scenario has only one definition file and is used in the customer service (the findById API method). The third scenario has four definition files and is used in the transfer service (the execute API method).

scenarios

As I mentioned before, interaction between microservices is realized by @FeignClient. WireMock, used by Spring Cloud Contract, records the requests/responses defined in scenario2 inside the account service. The recorded interactions are then used by the @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

All the tests are generated and run during the Maven build, for example with the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract you can read about it here.

Finally, we can define a Continuous Integration pipeline for our microservices. Each of them should be built independently. More about a Continuous Integration / Continuous Delivery environment can be read in one of my previous posts: How to setup Continuous Delivery environment. Here’s a sample pipeline created with the Jenkins Pipeline Plugin for the account service. In the Checkout stage we update our working copy to the newest version from the repository. In the Build stage we start by reading the project version set inside pom.xml, then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline method. The same pipelines can be configured for all other microservices. In the described sample all microservices are placed in the same Git repository with one Maven version, for simplicity. But we can imagine that every microservice lives in a different repository with an independent version in its pom.xml. Tests will always be run with the newest version of stubs, which is set with + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {“pl.piomin:account-service:+:stubs:2222”}, workOffline = true)

node {

    withMaven(maven: 'Maven') {

        stage ('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
        }

        stage ('Build') {
            def pom = readMavenPom file: 'pom.xml'
            def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
            env.pom_version = version
            print 'Build version: ' + version
            currentBuild.description = "v${version}"

            dir('account-service') {
                bat "mvn clean install -Dmaven.test.failure.ignore=true"
            }

            junit '**/target/surefire-reports/TEST-*.xml'
        }

    }

}

Here’s the pipeline visualization on the Jenkins Management Dashboard.

account-pipeline

Microservices API Documentation with Swagger2

Swagger is the most popular tool for designing, building and documenting RESTful APIs. It has nice integration with Spring Boot. To use it in conjunction with Spring we need to add the following two dependencies to the Maven pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.6.1</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.6.1</version>
</dependency>

Swagger configuration for a single Spring Boot service is pretty simple. The level of complexity is greater if you want to create one documentation site for several separate microservices. Such documentation should be available on the API gateway. In the picture below you can see the architecture of our sample solution.

swagger

First, we should configure Swagger on every microservice. To enable it we have to declare @EnableSwagger2 on the main class. API documentation will be automatically generated from the source code by the Swagger library during application startup. The process is controlled by the Docket @Bean, which is also declared in the main class. The API version is read from the pom.xml file using MavenXpp3Reader. We also set some other properties like title, author and description using the apiInfo method. By default, Swagger generates documentation for all REST services, including those created by Spring Boot. We would like to limit the documentation to our @RestController located inside the pl.piomin.microservices.advanced.account.api package.

    @Bean
    public Docket api() throws IOException, XmlPullParserException {
        MavenXpp3Reader reader = new MavenXpp3Reader();
        Model model = reader.read(new FileReader("pom.xml"));
        return new Docket(DocumentationType.SWAGGER_2)
          .select()
          .apis(RequestHandlerSelectors.basePackage("pl.piomin.microservices.advanced.account.api"))
          .paths(PathSelectors.any())
          .build().apiInfo(new ApiInfo("Account Service Api Documentation", "Documentation automatically generated", model.getParent().getVersion(), null, new Contact("Piotr Mińkowski", "piotrminkowski.wordpress.com", "piotr.minkowski@gmail.com"), null, null));
    }
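
For completeness, the main class hosting that bean might look like the sketch below – as described above, the sample declares both the @EnableSwagger2 annotation and the Docket @Bean there.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import springfox.documentation.swagger2.annotations.EnableSwagger2;

@SpringBootApplication
@EnableSwagger2
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	// the api() Docket @Bean shown above is declared in this class as well

}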

Here’s our RESTful API controller.

@RestController
public class AccountController {

	@Autowired
	AccountRepository repository;

	protected Logger logger = Logger.getLogger(AccountController.class.getName());

	@RequestMapping(value = "/accounts/{number}", method = RequestMethod.GET)
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return repository.findByNumber(number);
	}

	@RequestMapping(value = "/accounts/customer/{customer}", method = RequestMethod.GET)
	public List<Account> findByCustomer(@PathVariable("customer") String customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return repository.findByCustomerId(customerId);
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.GET)
	public List<Account> findAll() {
		logger.info("Account.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.POST)
	public Account add(@RequestBody Account account) {
		logger.info(String.format("Account.add(%s)", account));
		return repository.save(account);
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.PUT)
	public Account update(@RequestBody Account account) {
		logger.info(String.format("Account.update(%s)", account));
		return repository.save(account);
	}

}

A similar Swagger configuration exists on every microservice. The API documentation is available under http://localhost:<port>/swagger-ui.html. Now, we would like to enable one documentation site embedded on the gateway for all microservices. Here’s the Spring @Component implementing the SwaggerResourcesProvider interface, which overrides the default provider configuration existing in the Spring context.

@Component
@Primary
@EnableAutoConfiguration
public class DocumentationController implements SwaggerResourcesProvider {

	@Override
	public List<SwaggerResource> get() {
		List<SwaggerResource> resources = new ArrayList<>();
		resources.add(swaggerResource("account-service", "/api/account/v2/api-docs", "2.0"));
		resources.add(swaggerResource("customer-service", "/api/customer/v2/api-docs", "2.0"));
		resources.add(swaggerResource("product-service", "/api/product/v2/api-docs", "2.0"));
		resources.add(swaggerResource("transfer-service", "/api/transfer/v2/api-docs", "2.0"));
		return resources;
	}

	private SwaggerResource swaggerResource(String name, String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(name);
		swaggerResource.setLocation(location);
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}

All microservices’ api-docs are added as Swagger resources. The location addresses are proxied via the Zuul gateway. Here’s the gateway route configuration.

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service
    product:
      path: /product/**
      serviceId: product-service
    transfer:
      path: /transfer/**
      serviceId: transfer-service

Now the API documentation is available under the gateway address http://localhost:8765/swagger-ui.html. You can see how it looks for the account service in the picture below. We can select the source service in the combo box placed inside the title panel.

swagger-1

Documentation appearance can be easily customized by providing a UiConfiguration @Bean. In the code below I changed the default operations expansion level by setting “list” as the second constructor parameter – docExpansion.

	@Bean
	UiConfiguration uiConfig() {
		return new UiConfiguration("validatorUrl", "list", "alpha", "schema",
				UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, false, true, 60000L);
	}

You can expand every operation to see the details. Every operation can be tested by providing the required parameters and clicking the Try it out! button.

swagger-2

swagger-3

Sample application source code is available on GitHub.

Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin

One of the most frequently mentioned challenges related to creating a microservices-based architecture is monitoring. Each microservice should run in an environment isolated from the other microservices, so it does not share resources such as databases or log files with them. However, an essential requirement for microservices architecture is relatively easy access to the call history, including the ability to look through the request propagation between multiple microservices. Grepping the logs is not the right solution for that problem. There are some helpful tools which can be used when creating microservices with the Spring Boot and Spring Cloud frameworks.

Spring Cloud Sleuth – a library available as a part of the Spring Cloud project. It lets you track the progress of requests across subsequent microservices by adding appropriate headers to the HTTP requests. The library is based on the MDC (Mapped Diagnostic Context) concept, where you can easily extract values put into the context and display them in the logs.

Zipkin – a distributed tracing system that helps gather timing data for every request propagated between independent services. It has a simple management console where we can find a visualization of the time statistics generated by subsequent services.

ELK – Elasticsearch, Logstash, Kibana: three different tools usually used together. They are used for searching, analyzing and visualizing log data in real time.

Probably many of you, even if you have not had contact with Java or microservices before, have heard about Logstash and Kibana. For example, if you look at hub.docker.com, among the most popular images you will find the ones for the above tools. In our example we will just use those images. Let’s begin by running the container with Elasticsearch.

docker run -d -it --name es -p 9200:9200 -p 9300:9300 elasticsearch

Then we can run the Kibana container and link it to Elasticsearch.

docker run -d -it --name kibana --link es:elasticsearch -p 5601:5601 kibana

At the end we will start Logstash with input and output declared. As the input we declare TCP, which is compatible with LogstashTcpSocketAppender, used as the logging appender in our sample application. As the output Elasticsearch has been declared. Each microservice will be indexed under its name with a micro prefix.

docker run -d -it --name logstash -p 5000:5000 logstash -e 'input { tcp { port => 5000 codec => "json" } } output { elasticsearch { hosts => ["192.168.99.100"] index => "micro-%{serviceName}"} }'

Now we can take a look at the sample microservices. This post is a continuation of my previous article Part 1: Creating microservice using Spring Cloud, Eureka and Zuul. The architecture and exposed services are the same as in the previous sample. Source code is available on GitHub (branch logstash). As I mentioned before, we will use the Logback library for sending log data to Logstash. In addition to the three Logback dependencies we also add libraries for Zipkin integration and the Spring Cloud Sleuth starter. Here’s a fragment of pom.xml for a microservice.

		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-sleuth</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-sleuth-zipkin</artifactId>
		</dependency>
		<dependency>
			<groupId>net.logstash.logback</groupId>
			<artifactId>logstash-logback-encoder</artifactId>
			<version>4.9</version>
		</dependency>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>1.2.3</version>
		</dependency>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-core</artifactId>
			<version>1.2.3</version>
		</dependency>

There is also a Logback configuration file in the src/main/resources directory. Here’s a fragment of logback.xml. We can configure which logging fields are sent to Logstash by declaring tags like mdc, logLevel, message etc. We also append a service name field for the Elasticsearch index creation.

	<appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
		<destination>192.168.99.100:5000</destination>

		<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
			<providers>
				<mdc />
				<context />
				<logLevel />
				<loggerName />

				<pattern>
					<pattern>
						{
						"serviceName": "account-service"
						}
					</pattern>
				</pattern>

				<threadName />
				<message />
				<logstashMarkers />
				<stackTrace />
			</providers>
		</encoder>
	</appender>

The configuration of Spring Cloud Sleuth is very simple. We only have to add the spring-cloud-starter-sleuth dependency to pom.xml and declare a sampler @Bean. In the sample I declared AlwaysSampler, which exports every span, but there is also another option – PercentageBasedSampler, which samples a fixed fraction of spans.

	@Bean
	public AlwaysSampler defaultSampler() {
	  return new AlwaysSampler();
	}
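
If exporting every span is too much, the percentage-based alternative mentioned above can be declared instead – a sketch assuming the Sleuth 1.x API from the org.springframework.cloud.sleuth.sampler package; the 10% value is arbitrary.

	@Bean
	public Sampler defaultSampler() {
	  SamplerProperties properties = new SamplerProperties();
	  properties.setPercentage(0.1f); // export roughly 10% of spans
	  return new PercentageBasedSampler(properties);
	}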

After starting the ELK Docker containers we need to run our microservices. There are 5 Spring Boot applications which need to be run: discovery-service, account-service, customer-service, gateway-service and zipkin-service. After launching all of them we can try calling some services, for example http://localhost:8765/api/customer/customers/{id}, which causes calls to both the customer and account services. All logs will be stored in Elasticsearch with the micro-%{serviceName} index. They can be searched in Kibana with the micro-* index pattern. Index patterns are created in Kibana under Management -> Index Patterns. Kibana is available at http://192.168.99.100:5601. On first run we will be prompted for an index pattern, so let’s type micro-*. Under the Discover section we can take a look at all logs matching the typed pattern, with a timeline visualization.

kibana2

Kibana is a rather intuitive and user-friendly tool. I will not describe in detail how to use Kibana, because you can easily find it out by yourself by reading the documentation or just clicking through the UI. The most important thing is to be able to search logs by filtering criteria. In the picture below there is an example of searching logs by the X-B3-TraceId field, which is added to the request header by Spring Cloud Sleuth. Sleuth also adds X-B3-SpanId for marking the request for a single microservice. We can select which fields are displayed in the result list – in this sample I selected message and serviceName, as you can see in the left pane of the picture.

kibana1

Here’s a picture with single request details. It is visible after expanding each log row.

kibana3

Spring Cloud Sleuth also sends statistics to Zipkin. That is a different kind of data than what is stored via Logstash – these are timing statistics for each request. The Zipkin UI is really simple. You can filter requests by criteria like time, service name or endpoint name. Here’s a picture with the same requests that were visualized with Kibana: http://localhost:8765/api/customer/customers/{id}.

zipkin-1

We can always see the details of each request by clicking on it. Then you see a picture similar to the one visible below. In the beginning, the request was processed on the API gateway. Then the gateway discovered the customer service on the Eureka server and called that service. The customer service likewise had to discover the account service and then call it. In this view you can easily find out which operation is the most time consuming.

zipkin-3